Bandits for Structure Perturbation-based Black-box Attacks to Graph
Neural Networks with Theoretical Guarantees
- URL: http://arxiv.org/abs/2205.03546v1
- Date: Sat, 7 May 2022 04:17:25 GMT
- Title: Bandits for Structure Perturbation-based Black-box Attacks to Graph
Neural Networks with Theoretical Guarantees
- Authors: Binghui Wang, Youqi Li, and Pan Zhou
- Abstract summary: Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box attacks to GNNs via structure perturbation, with theoretical guarantees.
- Score: 60.61846004535707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have achieved state-of-the-art performance in
many graph-based tasks such as node classification and graph classification.
However, many recent works have demonstrated that an attacker can mislead GNN
models by slightly perturbing the graph structure. Existing attacks to GNNs
either assume the less practical threat model, in which the attacker is assumed
to have access to the GNN model parameters, or adopt the practical black-box
threat model but perturb node features, which has been shown to be
insufficiently effective. In this paper, we aim to bridge this gap and consider
black-box attacks to GNNs via structure perturbation, with theoretical
guarantees. We propose to address this challenge through bandit techniques.
Specifically, we formulate our attack as an online optimization problem with
bandit feedback. This problem is essentially NP-hard because perturbing the
graph structure is a binary optimization problem. We then propose an online
attack based on bandit optimization whose regret is proven to be sublinear in
the number of queries $T$, i.e., $\mathcal{O}(\sqrt{N}T^{3/4})$, where $N$ is
the number of nodes in the graph. Finally, we evaluate our proposed
attack by conducting experiments over multiple datasets and GNN models. The
experimental results on various citation graphs and image graphs show that our
attack is both effective and efficient. Source code is available at
https://github.com/Metaoblivion/Bandit_GNN_Attack
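To make the formulation concrete, below is a minimal illustrative sketch (not the authors' algorithm) of a bandit-feedback structure-perturbation loop: the binary edge-flip decision is relaxed to a probability vector, each round samples one perturbed graph, queries the target GNN once for a scalar loss (bandit feedback), updates the relaxed variable with a standard one-point zeroth-order gradient estimate, and finally rounds to a budget of edge flips. The oracle query_loss, the parameter names, and the rounding step are assumptions made for illustration; the paper's own algorithm and analysis are what yield the $\mathcal{O}(\sqrt{N}T^{3/4})$ regret bound.

    import numpy as np

    # Illustrative sketch of a bandit-feedback structure-perturbation attack.
    # Assumptions (not from the paper): `adj` is a dense 0/1 numpy adjacency
    # matrix and `query_loss(adj_perturbed)` is a black-box oracle returning
    # the attack loss of the target GNN on the perturbed graph.
    def bandit_structure_attack(adj, query_loss, budget=5, T=200,
                                delta=0.05, lr=0.1, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        n = adj.shape[0]
        idx = np.triu_indices(n, k=1)          # candidate edges (upper triangle)
        m = len(idx[0])
        p = np.full(m, 0.5)                    # relaxed edge-flip probabilities

        def perturb(bits):
            a = adj.copy()
            a[idx] = np.logical_xor(a[idx], bits).astype(a.dtype)
            a = np.triu(a, 1)
            return a + a.T                     # keep the graph symmetric

        for _ in range(T):
            u = rng.standard_normal(m)
            u /= np.linalg.norm(u)             # random direction on the sphere
            probs = np.clip(p + delta * u, 0.0, 1.0)
            bits = (rng.random(m) < probs).astype(int)
            loss = query_loss(perturb(bits))   # one bandit-feedback query
            grad_est = (m / delta) * loss * u  # one-point gradient estimate
            p = np.clip(p - lr * grad_est, 0.0, 1.0)

        flips = np.zeros(m, dtype=int)         # round: flip the `budget`
        flips[np.argsort(-p)[:budget]] = 1     # most likely edges
        return perturb(flips)

Because only the scalar loss value of each query is used, the loop never needs gradients or parameters of the target model, which is what makes it a black-box attack.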
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275] (arXiv 2024-05-09)
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985] (arXiv 2021-08-21)
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
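As a rough illustration of the kind of objective described above (the symbols here are assumptions, not the paper's exact notation), the attack can be posed as a constrained binary problem over the adjacency matrix:

    $\min_{A' \in \{0,1\}^{n \times n}} \; \|A' - A\|_{0} \quad \text{s.t.} \quad \hat{y}(A', X) \neq y$

where $A$ is the clean adjacency matrix, $X$ the node features, $\hat{y}(\cdot)$ the target GNN's hard-label prediction for the graph, and $y$ its true label; the paper's exact formulation and its query-efficient solution strategy are given in the paper itself.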
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814] (arXiv 2021-08-07)
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
- Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense [3.3504365823045035] (arXiv 2021-04-30)
Graph Neural Networks (GNNs) have received significant attention due to their state-of-the-art performance on various graph representation learning tasks.
Recent studies reveal that GNNs are vulnerable to adversarial attacks, i.e. an attacker is able to fool the GNNs by perturbing the graph structure or node features deliberately.
Most existing attacking algorithms require access to either the model parameters or the training data, which is not practical in the real world.
- Adversarial Attack on Large Scale Graph [58.741365277995044] (arXiv 2020-09-08)
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that the main bottleneck is that they have to use the whole graph for the attack, so time and space complexity grow as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
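A minimal sketch of how such a metric could be computed is shown below; as an assumption, DAC is taken here to be the shift in the degree assortativity coefficient between the clean and the attacked graph, while the paper's precise definition may differ.

    import networkx as nx

    # Assumed reading of DAC (not necessarily the paper's exact definition):
    # the change in degree assortativity caused by the adversarial perturbation.
    def degree_assortativity_change(clean: nx.Graph, attacked: nx.Graph) -> float:
        r_clean = nx.degree_assortativity_coefficient(clean)
        r_attacked = nx.degree_assortativity_coefficient(attacked)
        return abs(r_attacked - r_clean)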
- Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function [62.89388227354517] (arXiv 2020-09-01)
Graph neural network (GNN), the mainstream method to learn on graph data, is vulnerable to graph evasion attacks.
Existing work has at least one of the following drawbacks: 1) it is limited to directly attacking two-layer GNNs; 2) it is inefficient; and 3) it is impractical, as it needs to know all or part of the GNN model parameters.
We propose an influence-based efficient, direct, and restricted black-box evasion attack to any-layer GNNs.
- DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder [22.754141951413786] (arXiv 2020-06-16)
Graph neural networks (GNNs) achieve remarkable performance for tasks on graph data.
Recent works show they are extremely vulnerable to adversarial structural perturbations, making their outcomes unreliable.
We propose DefenseVGAE, a novel framework leveraging variational graph autoencoders (VGAEs) to defend GNNs against such attacks.
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495] (arXiv 2020-05-20)
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
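The general flavor of such joint structure-and-model learning can be sketched as follows. This is an illustrative toy under assumed choices, not the authors' Pro-GNN implementation: a learnable adjacency S is pulled toward the observed graph and toward sparsity while a one-layer GCN is trained on it; the actual method additionally uses low-rank and feature-smoothness regularizers and an alternating optimization schedule.

    import torch
    import torch.nn.functional as F

    # Toy joint learning of graph structure S and GCN weights W (assumed setup).
    # A: observed adjacency (float tensor), X: node features, y: integer labels,
    # train_mask: boolean mask of training nodes.
    def train_joint(A, X, y, train_mask, n_classes,
                    epochs=200, alpha=1.0, beta=5e-4, lr=0.01):
        n, d = X.shape
        S = torch.nn.Parameter(A.clone())                          # learnable structure
        W = torch.nn.Parameter(0.01 * torch.randn(d, n_classes))   # GCN weights
        opt = torch.optim.Adam([S, W], lr=lr)
        for _ in range(epochs):
            S_sym = (S + S.t()) / 2                                # keep S symmetric
            S_hat = S_sym + torch.eye(n)                           # add self-loops
            deg = S_hat.sum(dim=1).clamp(min=1e-6)
            S_norm = S_hat / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)
            logits = S_norm @ X @ W                                # one-layer GCN
            loss = (F.cross_entropy(logits[train_mask], y[train_mask])
                    + alpha * (S_sym - A).pow(2).sum()             # stay close to A
                    + beta * S_sym.abs().sum())                    # sparsity
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                S.clamp_(0.0, 1.0)                                 # keep edge weights in [0, 1]
        return S.detach(), W.detach()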