HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial
Training of GNNs
- URL: http://arxiv.org/abs/2312.04879v1
- Date: Fri, 8 Dec 2023 07:32:56 GMT
- Title: HC-Ref: Hierarchical Constrained Refinement for Robust Adversarial
Training of GNNs
- Authors: Xiaobing Pei, Haoran Yang, and Gang Shen
- Abstract summary: Adversarial training, which has been shown to be one of the most effective defense mechanisms against adversarial attacks in computer vision, holds great promise for enhancing the robustness of GNNs.
We propose a hierarchical constrained refinement framework (HC-Ref) that enhances the anti-perturbation capabilities of GNNs and downstream classifiers separately.
- Score: 7.635985143883581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that attackers can catastrophically reduce
the performance of GNNs by maliciously modifying the graph structure or the
node features. Adversarial training, one of the most effective defense
mechanisms against adversarial attacks in computer vision, holds great promise
for enhancing the robustness of GNNs. Research on defending graphs through
adversarial training remains limited, however, and deeper study is needed to
realize its full effectiveness. Building on robust adversarial training on
graphs, we therefore propose a hierarchical constrained refinement framework
(HC-Ref) that separately enhances the anti-perturbation capabilities of GNNs
and downstream classifiers, ultimately improving robustness. We design
corresponding adversarial regularization terms that adaptively narrow the
domain gap between the clean part and the perturbed part according to the
characteristics of each layer, promoting smoothness of the predicted
distributions of both parts. Moreover, existing work on robust adversarial
training for graphs concentrates mainly on node feature perturbations and
seldom considers alterations to the graph structure, which makes it difficult
to defend against attacks based on topological changes. This paper generates
adversarial examples via graph structure perturbations, offering an effective
way to defend against topology-based attack methods. Extensive experiments on
two real-world graph benchmarks show that HC-Ref successfully resists various
attacks and achieves better node classification performance than several
baseline methods.
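The abstract's layer-wise regularization can be made concrete with a short sketch: consistency terms that pull the predicted distributions on the clean and perturbed graphs together, applied separately at the GNN representation level and at the downstream classifier. This is a minimal sketch of one plausible reading; the KL form, the function names, and the weights alpha/beta are assumptions, not HC-Ref's published equations.

```python
import torch.nn.functional as F

# Hedged sketch of a two-level consistency penalty: one term for the GNN
# encoder's outputs, one for the downstream classifier's predictions.
# The KL form and the alpha/beta weights are assumptions, not the paper's
# exact regularizers.
def hierarchical_consistency_loss(encoder, classifier, features,
                                  adj, adj_pert, alpha=1.0, beta=1.0):
    h_clean = encoder(features, adj)       # embeddings on the clean graph
    h_pert = encoder(features, adj_pert)   # embeddings on the perturbed graph

    # Level 1: keep the encoder's output distributions close on both parts.
    enc_term = F.kl_div(F.log_softmax(h_pert, dim=-1),
                        F.softmax(h_clean, dim=-1),
                        reduction="batchmean")

    # Level 2: keep the classifier's predicted distributions smooth as well.
    cls_term = F.kl_div(F.log_softmax(classifier(h_pert), dim=-1),
                        F.softmax(classifier(h_clean), dim=-1),
                        reduction="batchmean")

    return alpha * enc_term + beta * cls_term
```

In training, this penalty would be added to the standard cross-entropy loss on the clean graph, with adj_pert regenerated periodically, for example by the structure perturbation sketched next.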
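The structure-perturbation side can likewise be sketched as gradient-guided edge flipping on a dense adjacency matrix: score every candidate flip by how much it would increase the training loss, then apply the top-k flips. This assumes a small, undirected graph and a differentiable model; the function name and the greedy top-k scoring are illustrative, not the paper's exact generator.

```python
import torch
import torch.nn.functional as F

def structure_adversarial_example(model, adj, features, labels, k=10):
    """Return a copy of the 0/1 adjacency matrix with k edges flipped."""
    adj_pert = adj.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(features, adj_pert), labels)
    grad = torch.autograd.grad(loss, adj_pert)[0]

    # A positive gradient on a non-edge (or a negative one on an edge)
    # means flipping that entry increases the loss; score flips accordingly.
    flip_score = grad * (1 - 2 * adj)
    flip_score = torch.triu(flip_score, diagonal=1)  # undirected, no self-loops
    idx = torch.topk(flip_score.flatten(), k).indices

    mask = torch.zeros_like(adj).flatten()
    mask[idx] = 1.0
    mask = mask.view_as(adj)
    mask = mask + mask.t()                 # keep the adjacency symmetric

    return (adj + mask * (1 - 2 * adj)).detach()  # flip the selected entries
```

Training against adjacencies perturbed this way, combined with the consistency penalty above, is the overall pattern the abstract describes.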
Related papers
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness against adversarial attacks, especially on heterogeneous graphs, remains underexplored.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets [3.325501850627077]
Graph Lottery Tickets (GLTs) can significantly reduce the inference latency and compute footprint compared to their dense counterparts.
Despite these benefits, their performance against adversarial structure perturbations remains to be fully explored.
We present an adversarially robust graph sparsification framework that prunes the adjacency matrix and the GNN weights.
arXiv Detail & Related papers (2023-12-11T17:52:46Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA built on cooperative homophilous augmentation of both the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks.
It remains unclear, however, how adversarial training could improve the generalization abilities of GNNs in graph analytics problems.
We formulate the co-adversarial perturbation (CAP) optimization problem over both weights and features, and design an alternating adversarial perturbation algorithm that flattens the weight and feature loss landscapes in turn; a hedged sketch follows this entry.
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
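For context on the CAP entry above, here is a hedged sketch of what alternating weight/feature adversarial perturbation can look like in PyTorch. cap_style_step, the step sizes eps/rho, and the even/odd scheduling are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def cap_style_step(model, optimizer, features, adj, labels,
                   epoch, eps=1e-2, rho=1e-2):
    """One training step that alternates feature and weight perturbations."""
    if epoch % 2 == 0:
        # Feature side: one FGSM-style ascent step on the node features.
        x = features.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x, adj), labels)
        grad = torch.autograd.grad(loss, x)[0]
        x_adv = (x + eps * grad.sign()).detach()
        adv_loss = F.cross_entropy(model(x_adv, adj), labels)
        backups = None
    else:
        # Weight side: ascend on the weights, backpropagate from the
        # perturbed point, then restore the original weights (AWP-style).
        backups = [p.detach().clone() for p in model.parameters()]
        loss = F.cross_entropy(model(features, adj), labels)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p.add_(rho * g / (g.norm() + 1e-12))
        adv_loss = F.cross_entropy(model(features, adj), labels)

    optimizer.zero_grad()
    adv_loss.backward()
    if backups is not None:
        with torch.no_grad():  # restore weights; .grad survives for the step
            for p, b in zip(model.parameters(), backups):
                p.copy_(b)
    optimizer.step()
```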
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty-Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior work in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)