Robustness of Graph Neural Networks at Scale
- URL: http://arxiv.org/abs/2110.14038v4
- Date: Sun, 30 Apr 2023 08:59:57 GMT
- Title: Robustness of Graph Neural Networks at Scale
- Authors: Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski and Stephan Günnemann
- Abstract summary: We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
- Score: 63.45769413975601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are increasingly important given their
popularity and the diversity of applications. Yet, existing studies of their
vulnerability to adversarial attacks rely on relatively small graphs. We
address this gap and study how to attack and defend GNNs at scale. We propose
two sparsity-aware first-order optimization attacks that maintain an efficient
representation despite optimizing over a number of parameters which is
quadratic in the number of nodes. We show that common surrogate losses are not
well-suited for global attacks on GNNs. Our alternatives can double the attack
strength. Moreover, to improve GNNs' reliability we design a robust aggregation
function, Soft Median, resulting in an effective defense at all scales. We
evaluate our attacks and defense with standard GNNs on graphs more than 100
times larger compared to previous work. We even scale one order of magnitude
further by extending our techniques to a scalable GNN.
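The Soft Median defense replaces the usual sum/mean neighborhood aggregation with a weighted mean whose weights decay with the distance to the dimension-wise median, so adversarially inserted neighbors receive little influence. A minimal single-neighborhood sketch in PyTorch follows; the paper's implementation is batched over all nodes, handles edge weights, and scales via sparse operations, and the temperature scaling shown here is only an illustrative choice.

```python
import torch

def soft_median(neighbor_feats: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Aggregate neighbor embeddings ([num_neighbors, dim]) with a Soft Median:
    neighbors close to the dimension-wise median receive large weights,
    outliers (e.g. adversarially inserted neighbors) receive small ones."""
    median = neighbor_feats.median(dim=0).values        # dimension-wise median, [dim]
    dist = torch.norm(neighbor_feats - median, dim=1)   # distance to the median, [num_neighbors]
    dim = neighbor_feats.shape[1]
    weights = torch.softmax(-dist / (temperature * dim ** 0.5), dim=0)
    return weights @ neighbor_feats                     # robust weighted mean, [dim]
```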
Related papers
- GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks [15.448462928073635]
Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data.
Recent studies show that GNNs are vulnerable to graph adversarial attacks.
We propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models.
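The summary names a reduced-rank, spectral route to robustness: rebuild the graph topology from its leading spectral components so that (typically high-frequency) adversarial edges are suppressed. The sketch below illustrates only this low-rank reconstruction idea on a dense adjacency matrix; GARNET itself uses scalable spectral embeddings and further refinement, and the `rank` and `threshold` values are assumptions.

```python
import numpy as np

def low_rank_adjacency(adj: np.ndarray, rank: int = 50, threshold: float = 0.1) -> np.ndarray:
    """Denoise an adjacency matrix by keeping only its top-`rank` spectral
    components, then re-sparsify by thresholding the reconstruction."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    adj_lr = (u[:, :rank] * s[:rank]) @ vt[:rank]    # best rank-`rank` approximation
    return (adj_lr > threshold).astype(adj.dtype)
```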
arXiv Detail & Related papers (2022-01-30T06:32:44Z)
- CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks.
It remains unclear how adversarial training could improve the generalization ability of GNNs in graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design the alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes alternately.
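Reading the summary literally, one training step alternates an adversarial perturbation of the weights (to flatten the weight loss landscape, as in sharpness-aware minimization) with an adversarial perturbation of the node features, before descending on the perturbed loss. The PyTorch-style sketch below is schematic and not the paper's algorithm; the step sizes, norms, and alternation schedule are assumptions.

```python
import torch

def cap_step(model, feats, adj, labels, loss_fn, opt, rho_w=0.05, rho_x=0.01):
    """One schematic co-adversarial step on weights and features."""
    # 1) Adversarial weight perturbation (sharpness-aware-style ascent).
    loss = loss_fn(model(feats, adj), labels)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho_w * g / (g.norm() + 1e-12))

    # 2) Adversarial feature perturbation (one signed ascent step on the inputs).
    feats_adv = feats.detach().requires_grad_(True)
    loss = loss_fn(model(feats_adv, adj), labels)
    (g_x,) = torch.autograd.grad(loss, [feats_adv])
    feats_adv = (feats_adv + rho_x * g_x.sign()).detach()

    # 3) Descend on the loss at the perturbed weights and features.
    #    (A faithful SAM-style step would restore the unperturbed weights
    #     before opt.step(); omitted here for brevity.)
    opt.zero_grad()
    loss_fn(model(feats_adv, adj), labels).backward()
    opt.step()
```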
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of perturbed edges in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
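Taken at face value, the stated objective corresponds to a perturbation-minimization problem under a misclassification constraint; one plausible formalization (notation assumed, not taken from the paper) is

\min_{A'} \; \|A' - A\|_0 \quad \text{s.t.} \quad f_\theta(A', X) \neq y, \qquad A' \in \{0,1\}^{n \times n},

where A is the clean adjacency matrix, A' the perturbed one, f_\theta the target GNN graph classifier queried only through its hard-label outputs, X the node features, and y the correct class.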
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) lack robustness and are vulnerable to adversarial perturbations.
Most existing works on attacking GNNs use gradient information to guide the attack and achieve strong performance.
We argue that such attacks scale poorly mainly because they must operate on the whole graph, so their time and space complexity grow with the size of the data.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
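A natural reading of Degree Assortativity Change is the difference of the degree assortativity coefficient before and after the attack; the NetworkX sketch below follows that reading, which may differ in normalization from the paper's exact definition.

```python
import networkx as nx

def degree_assortativity_change(clean_graph: nx.Graph, attacked_graph: nx.Graph) -> float:
    """DAC as sketched here: how much the attack shifts the tendency of
    nodes to link to nodes of similar degree."""
    before = nx.degree_assortativity_coefficient(clean_graph)
    after = nx.degree_assortativity_coefficient(attacked_graph)
    return after - before
```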
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function [62.89388227354517]
Graph neural network (GNN), the mainstream method to learn on graph data, is vulnerable to graph evasion attacks.
Existing work suffers from at least one of the following drawbacks: 1) it is limited to directly attacking two-layer GNNs; 2) it is inefficient; and 3) it is impractical, as it needs to know all or part of the GNN model parameters.
We propose an influence-based efficient, direct, and restricted black-box evasion attack to any-layer GNNs.
arXiv Detail & Related papers (2020-09-01T03:24:51Z)
- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks [16.941548115261433]
We develop GNNGuard, an algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure.
GNNGuard learns how to best assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes.
Experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average.
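The defense principle, as summarized, is to up-weight edges between similar nodes and prune edges between dissimilar ones. Below is a minimal sketch of one similarity-based reweighting pass; the cosine similarity and the pruning threshold are assumptions, whereas GNNGuard additionally learns the weighting and stabilizes it across layers.

```python
import torch

def reweight_edges(features: torch.Tensor, edge_index: torch.Tensor,
                   prune_below: float = 0.1) -> torch.Tensor:
    """Return one weight per edge: cosine similarity of the endpoints'
    features, with dissimilar (likely adversarial) edges pruned to zero."""
    src, dst = edge_index                                   # edge_index: [2, num_edges]
    sim = torch.cosine_similarity(features[src], features[dst], dim=1)
    sim = sim.clamp(min=0.0)                                # ignore negative similarity
    return torch.where(sim < prune_below, torch.zeros_like(sim), sim)
```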
arXiv Detail & Related papers (2020-06-15T06:07:46Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
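Jointly learning the structure and the model is typically cast as one objective over a learned adjacency S and the GNN parameters \theta; a schematic form consistent with the summary, with regularizer choices assumed rather than quoted from the paper, is

\min_{\theta,\, S} \; \mathcal{L}_{\mathrm{GNN}}(\theta, S, X, y) + \alpha \|S\|_1 + \beta \|S\|_{*} + \gamma \|S - A\|_F^2,

where A is the observed (possibly poisoned) adjacency, \|S\|_1 promotes sparsity, the nuclear norm \|S\|_{*} promotes low rank, and S and \theta are updated alternately.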
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)