Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets
- URL: http://arxiv.org/abs/2312.06568v1
- Date: Mon, 11 Dec 2023 17:52:46 GMT
- Title: Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets
- Authors: Subhajit Dutta Chowdhury, Zhiyu Ni, Qingyuan Peng, Souvik Kundu,
Pierluigi Nuzzo
- Abstract summary: Graph Lottery Tickets (GLTs) can significantly reduce the inference latency and compute footprint compared to their dense counterparts.
Despite these benefits, their performance against adversarial structure perturbations remains to be fully explored.
We present an adversarially robust graph sparsification framework that prunes the adjacency matrix and the GNN weights.
- Score: 3.325501850627077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Lottery Tickets (GLTs), comprising a sparse adjacency matrix and a
sparse graph neural network (GNN), can significantly reduce the inference
latency and compute footprint compared to their dense counterparts. Despite
these benefits, their performance against adversarial structure perturbations
remains to be fully explored. In this work, we first investigate the resilience
of GLTs against different structure perturbation attacks and observe that they
are highly vulnerable and show a large drop in classification accuracy. Based
on this observation, we then present an adversarially robust graph
sparsification (ARGS) framework that prunes the adjacency matrix and the GNN
weights by optimizing a novel loss function capturing the graph homophily
property and information associated with both the true labels of the train
nodes and the pseudo labels of the test nodes. By iteratively applying ARGS to
prune both the perturbed graph adjacency matrix and the GNN model weights, we
can find adversarially robust graph lottery tickets that are highly sparse yet
achieve competitive performance under different untargeted training-time
structure attacks. Evaluations conducted on various benchmarks, considering
different poisoning structure attacks, namely, PGD, MetaAttack, Meta-PGD, and
PR-BCD, demonstrate that the GLTs generated by ARGS can significantly improve
robustness, even at high levels of sparsity.
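The abstract describes the core of ARGS: a composite loss combining supervised cross-entropy on the train nodes, cross-entropy on pseudo-labeled test nodes, and a homophily term that guides pruning of the adjacency matrix. As a rough illustration only, here is a minimal PyTorch sketch of such a loss; the function name, the weights alpha and beta, and the use of cosine feature similarity as the homophily measure are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only; not the authors' ARGS implementation.
import torch
import torch.nn.functional as F

def args_style_loss(logits, adj_mask, edge_index, features,
                    train_idx, y_train, test_idx, y_pseudo,
                    alpha=1.0, beta=0.1):  # alpha, beta: assumed weights
    # Supervised cross-entropy on the true labels of the train nodes.
    loss_train = F.cross_entropy(logits[train_idx], y_train)
    # Cross-entropy on the pseudo labels assigned to the test nodes.
    loss_pseudo = F.cross_entropy(logits[test_idx], y_pseudo)
    # Homophily term (assumed form): penalize retained edges whose
    # endpoints have dissimilar features, so optimizing the learnable
    # edge mask prefers to drop heterophilous (likely adversarial) edges.
    src, dst = edge_index  # edge_index: 2 x num_edges tensor
    sim = F.cosine_similarity(features[src], features[dst], dim=-1)
    loss_hom = (torch.sigmoid(adj_mask) * (1.0 - sim)).mean()
    return loss_train + alpha * loss_pseudo + beta * loss_hom
```

In the iterative scheme the abstract outlines, one would alternate between optimizing such a loss and magnitude-pruning the smallest entries of the edge mask and of the GNN weights until the target sparsity is reached; the sketch above covers only the loss composition.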
Related papers
- Self-Guided Robust Graph Structure Refinement [37.235898707554284]
We propose SG-GSR, a self-guided graph structure refinement (GSR) framework to defend GNNs against adversarial attacks.
We demonstrate the effectiveness of SG-GSR under various scenarios, including non-targeted attacks, targeted attacks, feature attacks, e-commerce fraud, and noisy node labels.
arXiv Detail & Related papers (2024-02-19T05:00:07Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial Training of GNNs [7.635985143883581]
Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constraint refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
arXiv Detail & Related papers (2023-12-08T07:32:56Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Similarity Preserving Adversarial Graph Contrastive Learning [5.671825576834061]
We propose SP-AGCL, a similarity-preserving adversarial graph contrastive learning framework.
We show that SP-AGCL achieves competitive performance on several downstream tasks.
arXiv Detail & Related papers (2023-06-24T04:02:50Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable to and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial perturbations.
Most existing attacks on GNNs use gradient information to guide the attack and achieve strong performance.
However, they must operate on the whole graph, so their time and space complexity grows with the scale of the data.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from conventional backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.