A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
- URL: http://arxiv.org/abs/2108.09513v1
- Date: Sat, 21 Aug 2021 14:01:34 GMT
- Title: A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
- Authors: Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu
- Abstract summary: We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem, whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
- Score: 25.081630882605985
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) have achieved state-of-the-art performance in
various graph-structure-related tasks such as node classification and graph
classification. However, GNNs are vulnerable to adversarial attacks. Existing
works mainly focus on attacking GNNs for node classification; nevertheless, the
attacks against GNNs for graph classification have not been well explored.
In this work, we conduct a systematic study on adversarial attacks against
GNNs for graph classification via perturbing the graph structure. In
particular, we focus on the most challenging attack, i.e., the hard-label
black-box attack, where an attacker has no knowledge about the target GNN model
and can only obtain predicted labels through querying the target model. To
achieve this
goal, we formulate our attack as an optimization problem, whose objective is to
minimize the number of edges to be perturbed in a graph while maintaining a
high attack success rate. The original optimization problem is intractable to
solve, so we relax it to a tractable one, which can be solved with a
theoretical convergence guarantee. We also design a coarse-grained
searching algorithm and a query-efficient gradient computation algorithm to
decrease the number of queries to the target GNN model. Our experimental
results on three real-world datasets demonstrate that our attack can
effectively attack representative GNNs for graph classification with fewer
queries and perturbations. We also evaluate the effectiveness of our attack
under two defenses: one uses a well-designed adversarial graph detector, and in
the other the target GNN model itself is equipped with a defense to prevent
adversarial graph generation. Our experimental results show that such defenses
are not effective enough, which highlights the need for more advanced defenses.
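The attack pipeline described above (hard-label queries only, a coarse-grained search for an initial perturbation, and a query-efficient gradient estimate) can be made concrete. As a plausible formalization of the stated objective, with notation assumed here rather than taken from the paper (adjacency matrix A, binary perturbation vector theta over candidate edge pairs, target model f, original label y_0, and \oplus flipping the selected edges):

```latex
\min_{\theta \in \{0,1\}^{n(n-1)/2}} \|\theta\|_0
\quad \text{s.t.} \quad f(A \oplus \theta) \neq y_0
```

The sketch below is an illustrative reconstruction in this spirit, not the authors' implementation: the toy query_label oracle, the perturb/boundary_dist/sign_grad helpers, and every hyperparameter are assumptions. It follows the general structure of hard-label attacks such as Sign-OPT: search along a perturbation direction for the smallest number of edge flips that changes the predicted label, then descend that distance using a sign-based gradient estimate built only from label queries.

```python
import numpy as np

# Toy stand-in for the hard-label oracle: the real attacker only observes the
# class label returned by querying the target GNN. This placeholder (label =
# whether node 0 has degree >= 4) lets the sketch run end to end.
def query_label(adj):
    return int(adj[0].sum() >= 4)

def perturb(adj, theta, k):
    """Flip the k candidate edges with the largest scores in theta."""
    rows, cols = np.triu_indices(adj.shape[0], 1)
    top = np.argsort(-np.abs(theta))[:k]
    out = adj.copy()
    out[rows[top], cols[top]] = 1 - out[rows[top], cols[top]]
    out[cols[top], rows[top]] = out[rows[top], cols[top]]  # keep symmetric
    return out

def boundary_dist(adj, y0, theta, k_max, queries):
    """Coarse-grained search: smallest number of edge flips along direction
    theta that changes the predicted label (k_max + 1 if none is found)."""
    for k in range(1, k_max + 1):
        queries[0] += 1
        if query_label(perturb(adj, theta, k)) != y0:
            return k
    return k_max + 1

def sign_grad(adj, y0, theta, k_max, queries, n_probes=10, eps=1e-3):
    """Query-efficient gradient estimate: average the signs of finite
    differences of boundary_dist along random probe directions."""
    d0 = boundary_dist(adj, y0, theta, k_max, queries)
    grad = np.zeros_like(theta)
    for _ in range(n_probes):
        u = np.random.randn(theta.size)
        d = boundary_dist(adj, y0, theta + eps * u, k_max, queries)
        grad += u if d >= d0 else -u
    return grad / n_probes

# Demo on a random graph; every constant below is an illustrative assumption.
rng = np.random.default_rng(0)
n, k_max, lr, queries = 12, 10, 0.1, [0]
upper = np.triu((rng.random((n, n)) < 0.4).astype(int), 1)
adj = upper + upper.T
y0 = query_label(adj)

# Random restarts stand in for a coarse-grained initialization: keep sampling
# directions until one already yields an adversarial graph within k_max flips.
theta = rng.standard_normal(n * (n - 1) // 2)  # one score per candidate edge
for _ in range(100):
    if boundary_dist(adj, y0, theta, k_max, queries) <= k_max:
        break
    theta = rng.standard_normal(theta.size)

for _ in range(20):  # descend the boundary distance to need fewer flips
    theta -= lr * sign_grad(adj, y0, theta, k_max, queries)

print("flips needed:", boundary_dist(adj, y0, theta, k_max, queries),
      "| queries used:", queries[0])
```

Each call to boundary_dist costs up to k_max queries, so the probe count and search range directly trade attack quality against the query budget; managing that trade-off is exactly what the paper's coarse-grained searching algorithm and query-efficient gradient computation are designed for.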
Related papers
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees [60.61846004535707]
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box attacks on GNNs via structure perturbation and provide theoretical guarantees.
arXiv Detail & Related papers (2022-05-07T04:17:25Z) - Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z) - Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z) - Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense [3.3504365823045035]
Graph Neural Networks (GNNs) have received significant attention due to their state-of-the-art performance on various graph representation learning tasks.
Recent studies reveal that GNNs are vulnerable to adversarial attacks, i.e., an attacker can fool a GNN by deliberately perturbing the graph structure or node features.
Most existing attack algorithms require access to either the model parameters or the training data, which is not practical in the real world.
arXiv Detail & Related papers (2021-04-30T15:30:47Z) - Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that their main limitation is that they must operate on the whole graph, so their time and space complexity grow with the scale of the data.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z) - Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for graph classification.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)