Everything Perturbed All at Once: Enabling Differentiable Graph Attacks
- URL: http://arxiv.org/abs/2308.15614v1
- Date: Tue, 29 Aug 2023 20:14:42 GMT
- Title: Everything Perturbed All at Once: Enabling Differentiable Graph Attacks
- Authors: Haoran Liu, Bokun Wang, Jianling Wang, Xiangjue Dong, Tianbao Yang,
James Caverlee
- Abstract summary: Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
- Score: 61.61327182050706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As powerful tools for representation learning on graphs, graph neural
networks (GNNs) have played an important role in applications including social
networks, recommendation systems, and online web services. However, GNNs have
been shown to be vulnerable to adversarial attacks, which can significantly
degrade their effectiveness. Recent state-of-the-art approaches in adversarial
attacks rely on gradient-based meta-learning to selectively perturb a single
edge with the highest attack score until they reach the budget constraint.
While effective in identifying vulnerable links, these methods are plagued by
high computational costs. By leveraging continuous relaxation and
parameterization of the graph structure, we propose a novel attack method
called Differentiable Graph Attack (DGA) to efficiently generate effective
attacks while eliminating the need for costly retraining. Compared to the
state-of-the-art, DGA achieves nearly equivalent attack performance with 6
times less training time and 11 times smaller GPU memory footprint on different
benchmark datasets. Additionally, we provide extensive experimental analyses of
the transferability of the DGA among different graph models, as well as its
robustness against widely-used defense mechanisms.
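
To make the mechanism concrete, below is a minimal, hypothetical sketch (in PyTorch, not the authors' released DGA code) of the continuous-relaxation idea described in the abstract: each candidate edge flip is parameterized by a learnable score, the relaxed adjacency is passed through a differentiable surrogate GNN, all perturbations are optimized at once by gradient descent, and the result is projected back onto the discrete perturbation budget. The `surrogate(features, adj)` call signature, the sigmoid parameterization, and the top-k projection are illustrative assumptions.

```python
# A minimal sketch (not the authors' official DGA implementation) of a
# differentiable structure attack via continuous relaxation of edge flips.
# Names such as `perturb_logits`, `surrogate`, and `budget` are illustrative.
import torch
import torch.nn.functional as F

def differentiable_structure_attack(adj, features, labels, surrogate,
                                     budget, steps=200, lr=0.1):
    """adj: dense (N, N) 0/1 adjacency; surrogate: a trained, differentiable GNN
    assumed to accept (features, adjacency) and return node logits."""
    n = adj.shape[0]
    # One continuous flip score per node pair, initialized near "no perturbation".
    perturb_logits = torch.full((n, n), -5.0, requires_grad=True)
    optimizer = torch.optim.Adam([perturb_logits], lr=lr)

    for _ in range(steps):
        # Symmetric relaxed flip probabilities in (0, 1). Flipping an existing
        # edge removes it, flipping a non-edge adds it: A' = A + (1 - 2A) * P.
        p = torch.sigmoid((perturb_logits + perturb_logits.T) / 2)
        adj_relaxed = adj + (1.0 - 2.0 * adj) * p

        logits = surrogate(features, adj_relaxed)
        # Attack objective: maximize the surrogate's classification loss,
        # i.e. minimize its negative.
        loss = -F.cross_entropy(logits, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Project back to a discrete attack: keep only the `budget` strongest flips.
    with torch.no_grad():
        scores = torch.sigmoid((perturb_logits + perturb_logits.T) / 2)
        scores = torch.triu(scores, diagonal=1)  # count each undirected pair once
        top = torch.topk(scores.flatten(), budget).indices
        flip = torch.zeros(n * n)
        flip[top] = 1.0
        flip = flip.view(n, n)
        flip = flip + flip.T
        adj_attacked = adj + (1.0 - 2.0 * adj) * flip
    return adj_attacked
```

By contrast, the greedy meta-gradient attacks mentioned in the abstract re-estimate attack scores and flip only the single highest-scoring edge per step until the budget is exhausted, which is the source of the training-time and memory overhead that DGA is reported to avoid.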
Related papers
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness, especially on heterogeneous graphs, remains underexplored, particularly against adversarial attacks.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- On the Robustness of Graph Reduction Against GNN Backdoor [9.377257547233919]
Graph Neural Networks (GNNs) are gaining popularity across various domains due to their effectiveness in learning graph-structured data.
However, backdoor poisoning attacks pose serious threats to real-world applications.
Meanwhile, graph reduction techniques, including coarsening and sparsification, have emerged as effective methods for accelerating GNN training on large-scale graphs.
arXiv Detail & Related papers (2024-07-02T17:08:38Z)
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that the main reason is that they have to use the whole graph for attacks, resulting in increasing time and space complexity as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)