Are Defenses for Graph Neural Networks Robust?
- URL: http://arxiv.org/abs/2301.13694v1
- Date: Tue, 31 Jan 2023 15:11:48 GMT
- Title: Are Defenses for Graph Neural Networks Robust?
- Authors: Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski
- Abstract summary: We show that most Graph Neural Network (GNN) defenses show no or only marginal improvement compared to an undefended baseline.
We advocate using custom adaptive attacks as a gold standard, and we outline the lessons we learned from successfully designing such attacks.
Our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A cursory reading of the literature suggests that we have made a lot of
progress in designing effective adversarial defenses for Graph Neural Networks
(GNNs). Yet, the standard methodology has a serious flaw: virtually all of the
defenses are evaluated against non-adaptive attacks, leading to overly
optimistic robustness estimates. We perform a thorough robustness analysis of 7
of the most popular defenses spanning the entire spectrum of strategies, i.e.,
aimed at improving the graph, the architecture, or the training. The results
are sobering: most defenses show no or only marginal improvement compared to
an undefended baseline. We advocate using custom adaptive attacks as a gold
standard, and we outline the lessons we learned from successfully designing such
attacks. Moreover, our diverse collection of perturbed graphs forms a
(black-box) unit test offering a first glance at a model's robustness.
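To make the paper's central recommendation concrete, here is a minimal sketch of what an adaptive, gradient-based structure attack can look like: it differentiates the loss through the defended model itself and greedily flips the highest-gradient edges. This is an illustration only; defended_model, targets, and budget are placeholders, the sketch ignores the symmetry of undirected graphs, and the paper's actual adaptive attacks are tailored to each individual defense.

    import torch

    def adaptive_edge_flip_attack(defended_model, X, A, y, targets, budget):
        # Differentiate the attack loss through the defended model itself,
        # so the perturbation adapts to whatever the defense does.
        A_pert = A.clone().requires_grad_(True)
        logits = defended_model(X, A_pert)  # any (features, adjacency) -> logits pipeline
        loss = torch.nn.functional.cross_entropy(logits[targets], y[targets])
        grad = torch.autograd.grad(loss, A_pert)[0]
        # Flipping an absent edge changes A by +1, a present edge by -1,
        # so the first-order gain of each candidate flip is grad * (1 - 2A).
        gain = (grad * (1 - 2 * A)).flatten()
        flips = torch.topk(gain, budget).indices
        A_adv = A.clone().flatten()
        A_adv[flips] = 1 - A_adv[flips]  # commit the most promising flips
        return A_adv.reshape(A.shape)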
Related papers
- RIDA: A Robust Attack Framework on Incomplete Graphs
We introduce the Robust Incomplete Deep Attack Framework (RIDA).
RIDA is the first algorithm for robust gray-box poisoning attacks on incomplete graphs.
Extensive tests against 9 SOTA baselines on 3 real-world datasets demonstrate RIDA's superiority in handling incompleteness and its strong attack performance on incomplete graphs.
arXiv Detail & Related papers (2024-07-25T16:33:35Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state of the art, DGA achieves nearly equivalent attack performance with 6 times less training time and an 11 times smaller GPU memory footprint.
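The summary does not spell out DGA's formulation, so the following is only a generic sketch of the underlying idea of a differentiable structure attack: relax binary edge flips to continuous scores, optimize them by gradient descent, then discretize. The model, budget, and hyperparameters are assumed placeholders, not DGA's actual settings.

    import torch

    def relaxed_structure_attack(model, X, A, y, budget, steps=100, lr=0.1):
        # Flip scores start strongly negative so sigmoid(P) ~ 0 (no change).
        P = torch.full_like(A, -5.0, requires_grad=True)
        opt = torch.optim.Adam([P], lr=lr)
        for _ in range(steps):
            # Soft adjacency: each entry moves from A toward its flipped value.
            A_soft = A + (1 - 2 * A) * torch.sigmoid(P)
            loss = -torch.nn.functional.cross_entropy(model(X, A_soft), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Discretize: commit only the strongest flips, up to the budget.
        flips = torch.topk(torch.sigmoid(P.detach()).flatten(), budget).indices
        A_adv = A.clone().flatten()
        A_adv[flips] = 1 - A_adv[flips]
        return A_adv.reshape(A.shape)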
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- IDEA: Invariant Defense for Graph Adversarial Robustness
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
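IDEA's actual objectives are information-theoretic and are not given in this summary. Purely as a stand-in illustration of the invariance idea, one could penalize the distance between a node's representation on the clean graph and on perturbed views of it; encoder, A_views, and the weighting below are hypothetical.

    import torch

    def invariance_penalty(encoder, X, A, A_views):
        # Mean squared distance between each node's embedding on the clean
        # graph and on perturbed views; minimizing it pushes the encoder
        # toward representations that are invariant to the perturbations.
        z = encoder(X, A)
        return torch.stack(
            [((z - encoder(X, A_v)) ** 2).sum(dim=1).mean() for A_v in A_views]
        ).mean()

    # Hypothetical combined objective:
    # loss = task_loss + lam * invariance_penalty(encoder, X, A, A_views)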
arXiv Detail & Related papers (2023-05-25T07:16:00Z)
- Robustness of Graph Neural Networks at Scale
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
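A rough sketch in the spirit of the paper's sparsity-aware attacks (its projected randomized block coordinate descent): keep gradients for only a random block of candidate edge flips at a time, so memory scales with the block size rather than with n^2. The attack_loss closure is a placeholder that applies the weighted flips and returns the attacker's objective; per the finding above, its internal surrogate loss should be chosen carefully for global attacks.

    import torch

    def sparse_block_attack(attack_loss, n, budget, block=10_000, steps=50, lr=0.1):
        idx = torch.randint(0, n * n, (block,))      # flattened candidate edge flips
        w = torch.zeros(block, requires_grad=True)   # one weight per candidate
        for _ in range(steps):
            loss = attack_loss(idx, torch.sigmoid(w))
            (g,) = torch.autograd.grad(loss, w)
            with torch.no_grad():
                w += lr * g                          # ascend: maximize the attack loss
                # Resample the weakest tenth of the block with fresh candidates.
                drop = torch.topk(w, block // 10, largest=False).indices
                idx[drop] = torch.randint(0, n * n, (block // 10,))
                w[drop] = 0.0
        # Keep only the strongest flips, up to the budget, as the final perturbation.
        return idx[torch.topk(w.detach(), budget).indices]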
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
We conduct a systematic study of adversarial attacks against GNNs for graph classification that perturb the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
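Under the hard-label constraint only predicted classes are observable, so gradients are unavailable. Here is a minimal sketch of the minimize-the-flips objective described above: start from any successful perturbation and greedily try to revert flips while the graph stays misclassified. query is a placeholder returning the model's predicted class for a binary adjacency matrix, and the other names are likewise assumptions.

    import random

    def minimize_flips(query, A_orig, A_adv, true_label, flipped_edges):
        # Greedily revert flips one at a time; keep a reversion only if the
        # model still predicts the wrong class afterwards.
        edges = list(flipped_edges)
        random.shuffle(edges)
        for i, j in edges:
            A_adv[i][j] = A_orig[i][j]        # tentatively undo this flip
            A_adv[j][i] = A_orig[j][i]
            if query(A_adv) == true_label:    # attack broke: restore the flip
                A_adv[i][j] = 1 - A_orig[i][j]
                A_adv[j][i] = 1 - A_orig[j][i]
        return A_adv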
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense
Graph Neural Networks (GNNs) have received significant attention due to their state-of-the-art performance on various graph representation learning tasks.
Recent studies reveal that GNNs are vulnerable to adversarial attacks, i.e., an attacker can fool a GNN by deliberately perturbing the graph structure or node features.
Most existing attack algorithms require access to either the model parameters or the training data, which is impractical in the real world.
arXiv Detail & Related papers (2021-04-30T15:30:47Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack-agnostic, i.e., it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder
Graph neural networks (GNNs) achieve remarkable performance for tasks on graph data.
Recent works show they are extremely vulnerable to adversarial structural perturbations, making their outcomes unreliable.
We propose DefenseVGAE, a novel framework leveraging variational graph autoencoders (VGAEs) to defend GNNs against such attacks.
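A minimal sketch of the DefenseVGAE idea under simplifying assumptions (a one-step propagation encoder standing in for a full GCN, and no training loop shown): encode the possibly attacked graph, decode a denoised adjacency with an inner-product decoder, and hand the thresholded reconstruction to the downstream GNN. All class and variable names are illustrative, not the paper's code.

    import torch

    class TinyVGAE(torch.nn.Module):
        def __init__(self, d_in, d_z):
            super().__init__()
            self.mu = torch.nn.Linear(d_in, d_z)
            self.logvar = torch.nn.Linear(d_in, d_z)

        def forward(self, X, A_norm):
            H = A_norm @ X                     # single propagation step (stand-in encoder)
            mu, logvar = self.mu(H), self.logvar(H)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            A_rec = torch.sigmoid(z @ z.t())   # inner-product decoder
            return A_rec, mu, logvar

    # Defense step (training with reconstruction + KL loss omitted):
    # A_rec, _, _ = vgae(X, A_norm)
    # A_clean = (A_rec > 0.5).float()          # denoised graph for the downstream GNN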
arXiv Detail & Related papers (2020-06-16T03:30:23Z)
- Graph Structure Learning for Robust Graph Neural Networks
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully crafted perturbations, called adversarial attacks.
We propose Pro-GNN, a general framework that jointly learns a clean graph structure and a robust graph neural network model.
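A simplified sketch of such a joint learning loop (Pro-GNN additionally uses low-rank and feature-smoothness regularizers, omitted here): alternate between updating a learned adjacency S, kept predictive, close to the observed graph, and sparse, and updating the GNN weights on the current S. All names and coefficients are placeholders.

    import torch
    import torch.nn.functional as F

    def prognn_step(gnn, opt_gnn, S, opt_S, X, A, y, alpha=1e-3, beta=1e-3):
        # (1) Structure step: S should fit the task, stay near A, and be sparse.
        loss_S = (F.cross_entropy(gnn(X, S), y)
                  + alpha * (S - A).pow(2).sum()
                  + beta * S.abs().sum())
        opt_S.zero_grad()
        loss_S.backward()
        opt_S.step()
        with torch.no_grad():
            S.clamp_(0.0, 1.0)                 # keep entries valid edge weights
        # (2) GNN step on the current learned structure.
        loss_g = F.cross_entropy(gnn(X, S.detach()), y)
        opt_gnn.zero_grad()
        loss_g.backward()
        opt_gnn.step()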
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.