Spatio-Temporal Sparsification for General Robust Graph Convolution Networks
- URL: http://arxiv.org/abs/2103.12256v1
- Date: Tue, 23 Mar 2021 02:03:11 GMT
- Title: Spatio-Temporal Sparsification for General Robust Graph Convolution Networks
- Authors: Mingming Lu, Ya Zhang
- Abstract summary: Graph Neural Networks (GNNs) have attracted increasing attention due to their successful applications to various graph-structured data.
Recent studies have shown that adversarial attacks are threatening the functionality of GNNs.
We propose to defend against adversarial attacks on GNNs by applying Spatio-Temporal sparsification (called ST-Sparse) to the GNN hidden node representations.
- Score: 16.579675313683627
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) have attracted increasing attention due to their successful applications to various graph-structured data. However, recent studies have shown that adversarial attacks threaten the functionality of GNNs. Although numerous works have been proposed to defend against adversarial attacks from various perspectives, most of them are robust only against attacks in specific scenarios. To address this lack of robust generalization, we propose to defend against adversarial attacks on GNNs by applying Spatio-Temporal sparsification (called ST-Sparse) to the GNN hidden node representations. ST-Sparse is similar in spirit to Dropout regularization. Through intensive experimental evaluation with GCN as the target GNN model, we identify the benefits of ST-Sparse as follows: (1) ST-Sparse improves defense performance in most cases, increasing robust accuracy by up to 6%; (2) ST-Sparse demonstrates its robust generalization capability by integrating with existing defense methods, much as Dropout is integrated into various deep learning models as a standard regularization technique; (3) ST-Sparse also shows ordinary generalization capability on clean datasets, in that ST-SparseGCN (the integration of ST-Sparse and the original GCN) even outperforms the original GCN, while the other three representative defense methods are inferior to the original GCN.
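The abstract does not detail the ST-Sparse operator itself, so the following is only a minimal PyTorch sketch of the idea it describes: a GCN layer whose hidden node representations are sparsified during training, in the spirit of Dropout. The class name SparseGCNLayer, the top-k channel selection rule, and the hyperparameter k are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch, NOT the paper's implementation: a GCN layer whose hidden node
# representations are sparsified during training, in the spirit of the
# Dropout-like ST-Sparse regularization described in the abstract.
import torch
import torch.nn as nn


class SparseGCNLayer(nn.Module):
    """Hypothetical layer; the name, the top-k rule, and `k` are assumptions."""

    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.k = k  # assumed number of feature channels kept per node

    def forward(self, adj_norm, x):
        # Standard GCN propagation: normalized adjacency times transformed features.
        h = adj_norm @ self.linear(x)
        # Sparsify the hidden representation: keep only the k largest-magnitude
        # activations per node and zero out the rest (training time only,
        # analogous to Dropout).
        if self.training and self.k < h.size(1):
            topk_idx = torch.topk(h.abs(), self.k, dim=1).indices
            mask = torch.zeros_like(h).scatter_(1, topk_idx, 1.0)
            h = h * mask
        return torch.relu(h)
```

An ST-SparseGCN in the sense of the abstract would stack such layers, and the same kind of sparsification could in principle sit on top of other defense methods, much as Dropout is added to standard architectures.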
Related papers
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Robust Mid-Pass Filtering Graph Convolutional Networks [47.50194731200042]
Graph convolutional networks (GCNs) are currently the most promising paradigm for dealing with graph-structured data.
Recent studies have also shown that GCNs are vulnerable to adversarial attacks.
We propose a simple yet effective Mid-pass filter GCN (Mid-GCN) to overcome these challenges.
arXiv Detail & Related papers (2023-02-16T03:07:09Z)
- GUARD: Graph Universal Adversarial Defense [54.81496179947696]
We present a simple yet effective method named Graph Universal Adversarial Defense (GUARD).
GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in a graph.
GUARD significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins.
arXiv Detail & Related papers (2022-04-20T22:18:12Z)
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks.
It remains unclear how adversarial training could improve the generalization abilities of GNNs in graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design the alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes alternately.
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks [16.941548115261433]
We develop GNNGuard, an algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure.
GNNGuard learns how to best assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes (a minimal sketch of this idea appears after this list).
Experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average.
arXiv Detail & Related papers (2020-06-15T06:07:46Z)
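The GNNGuard entry above describes a similarity-driven edge-weighting mechanism. The sketch below illustrates that general idea only and is not GNNGuard's published algorithm; the function name reweight_edges and the prune_threshold parameter are hypothetical.

```python
# Minimal sketch, an assumption rather than GNNGuard's published algorithm:
# weight each edge by the cosine similarity of its endpoint features and prune
# edges between dissimilar nodes, so messages flow mainly between similar nodes.
import torch
import torch.nn.functional as F


def reweight_edges(x, edge_index, prune_threshold=0.1):
    """x: [num_nodes, dim] node features; edge_index: [2, num_edges] (COO)."""
    src, dst = edge_index
    sim = F.cosine_similarity(x[src], x[dst], dim=1)  # similarity per edge
    keep = sim > prune_threshold                      # prune unrelated endpoints
    weights = sim[keep].clamp(min=0.0)                # non-negative edge weights
    return edge_index[:, keep], weights
```

The returned weights would then replace the raw adjacency entries during neighbor aggregation, which is how similarity-based reweighting resists structure-perturbation attacks in spirit.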