Graph Structure Learning for Robust Graph Neural Networks
- URL: http://arxiv.org/abs/2005.10203v3
- Date: Sat, 27 Jun 2020 21:57:09 GMT
- Title: Graph Structure Learning for Robust Graph Neural Networks
- Authors: Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, Jiliang Tang
- Abstract summary: Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework, Pro-GNN, which can jointly learn a clean graph structure and a robust graph neural network model.
- Score: 63.04935468644495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are powerful tools in representation learning
for graphs. However, recent studies show that GNNs are vulnerable to
carefully-crafted perturbations, called adversarial attacks. Adversarial
attacks can easily fool GNNs into making wrong predictions on downstream
tasks. This vulnerability has raised increasing concerns about applying GNNs
in safety-critical applications. Therefore, developing robust algorithms to
defend against adversarial attacks is of great significance. A natural idea
for defending against adversarial attacks is to clean the perturbed graph. It is
evident that real-world graphs share some intrinsic properties. For example,
many real-world graphs are low-rank and sparse, and the features of two
adjacent nodes tend to be similar. In fact, we find that adversarial attacks
are likely to violate these graph properties. Therefore, in this paper, we
exploit these properties to defend against adversarial attacks on graphs. In
particular, we propose a general framework, Pro-GNN, which can jointly learn
a clean graph structure and a robust graph neural network model from the perturbed
graph guided by these properties. Extensive experiments on real-world graphs
demonstrate that the proposed framework achieves significantly better
performance compared with the state-of-the-art defense methods, even when the
graph is heavily perturbed. We release the implementation of Pro-GNN to our
DeepRobust repository for adversarial attacks and defenses (footnote:
https://github.com/DSE-MSU/DeepRobust). The specific experimental settings to
reproduce our results can be found in https://github.com/ChandlerBang/Pro-GNN.
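The three properties named above (low rank, sparsity, feature smoothness) map naturally onto regularization terms over a learned structure S. The PyTorch sketch below shows one plausible form of such a joint objective; the function name and hyperparameter values are illustrative, and the actual Pro-GNN optimizer alternates between updating the structure and the GNN weights, handling the nuclear- and L1-norm terms with proximal operators rather than plain gradients.

```python
import torch

def joint_structure_loss(S, A, X, gnn_loss,
                         alpha=1e-3, beta=1.0, lam=1.0, gamma=1.0):
    """Sketch of a Pro-GNN-style objective (illustrative, not the paper's
    exact formulation): fit the learned structure S to the observed,
    possibly poisoned, adjacency A while promoting the intrinsic graph
    properties the abstract identifies, plus the GNN training loss."""
    fit = torch.norm(S - A, p="fro") ** 2          # stay close to the observed graph
    sparsity = S.abs().sum()                       # L1 norm -> sparse structure
    low_rank = torch.linalg.svdvals(S).sum()       # nuclear norm -> low rank
    laplacian = torch.diag(S.sum(dim=1)) - S       # graph Laplacian of S
    smoothness = torch.trace(X.T @ laplacian @ X)  # adjacent nodes keep similar features
    return (fit + alpha * sparsity + beta * low_rank
            + lam * smoothness + gamma * gnn_loss)
```

Minimizing this alternately over S and the GNN parameters is what lets the framework recover a cleaner structure and a robust model at the same time.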
Related papers
- Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, the Graph Injection Attack (GIA).
We propose a general defense framework, CHAGNN, which counters GIA through cooperative homophilous augmentation of the graph data and the model.
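The summary above is terse, so as a rough illustration of homophily-based cleaning in general (not CHAGNN's actual cooperative procedure), one could prune edges whose endpoints the current model assigns dissimilar class distributions, since injected nodes tend to break homophily:

```python
import torch

def prune_heterophilous_edges(edge_index, logits, threshold=0.5):
    """Illustrative homophily-based augmentation (hypothetical helper, not
    from the paper): drop edges connecting nodes whose predicted class
    distributions disagree, which is where injected nodes tend to attach."""
    probs = torch.softmax(logits, dim=1)              # per-node class distributions
    src, dst = edge_index                             # edge_index has shape [2, num_edges]
    agreement = (probs[src] * probs[dst]).sum(dim=1)  # inner product of distributions
    return edge_index[:, agreement >= threshold]      # keep homophilous edges only
```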
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural
Networks with Theoretical Guarantees [60.61846004535707]
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box attacks on GNNs via structure perturbation and provide theoretical guarantees.
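The paper's contribution is a bandit formulation with regret guarantees; the toy epsilon-greedy loop below only conveys the query-feedback structure of such an attack. `query_loss` and `candidate_flips` are assumed placeholders: the former returns the victim GNN's loss on the target for a given set of edge flips, the latter enumerates candidate (u, v) flips.

```python
import random

def bandit_edge_flip_attack(query_loss, candidate_flips, budget,
                            trials=200, eps=0.2):
    """Toy epsilon-greedy bandit over candidate edge flips (a sketch of the
    black-box query loop, not the paper's algorithm or its guarantees)."""
    reward = {f: 0.0 for f in candidate_flips}
    count = {f: 0 for f in candidate_flips}
    for _ in range(trials):
        if random.random() < eps:
            flip = random.choice(candidate_flips)                 # explore
        else:
            flip = max(candidate_flips, key=lambda f: reward[f])  # exploit
        r = query_loss([flip])                         # one black-box query
        count[flip] += 1
        reward[flip] += (r - reward[flip]) / count[flip]  # running mean
    best = sorted(candidate_flips, key=lambda f: reward[f], reverse=True)
    return best[:budget]                               # flips to apply within budget
```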
arXiv Detail & Related papers (2022-05-07T04:17:25Z)
- Exploring High-Order Structure for Robust Graph Structure Learning [33.62223306095631]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks, i.e., an imperceptible structure perturbation can fool a GNN into making wrong predictions.
In this paper, we analyze the adversarial attack on graphs from the perspective of feature smoothness.
We propose a novel algorithm that incorporates the high-order structural information into the graph structure learning.
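A minimal NumPy sketch of the two ingredients the summary names: the feature-smoothness measure (a Laplacian quadratic form) and a multi-hop affinity standing in for high-order structure. The mixing scheme and `decay` parameter are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def feature_smoothness(A, X):
    """tr(X^T L X): large values mean adjacent nodes have dissimilar
    features, the signature this line of work associates with attack edges."""
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
    return np.trace(X.T @ L @ X)

def high_order_affinity(A, k=2, decay=0.5):
    """Blend multi-hop reachability into the structure estimate (an
    illustrative stand-in for the paper's high-order information)."""
    A_norm = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-normalize
    out = np.zeros_like(A_norm)
    power = np.eye(A.shape[0])
    for i in range(1, k + 1):
        power = power @ A_norm                     # i-hop transition matrix
        out += decay ** i * power                  # down-weight longer hops
    return out
```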
arXiv Detail & Related papers (2022-03-22T07:03:08Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
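Under hard-label feedback the attacker sees only the predicted class, not gradients or scores. As one illustrative way to drive down the number of perturbed edges (a greedy reduction step, not the paper's actual optimization), a successful flip set can be pruned while the misclassification persists; `predict` and `graph_with` are assumed placeholders.

```python
def minimize_flip_set(predict, graph_with, flips, target, wrong_label):
    """Greedily drop edge flips that are not needed to keep the target
    misclassified, shrinking the perturbation under hard-label feedback.
    Illustrative sketch only."""
    kept = list(flips)
    for f in list(kept):
        trial = [g for g in kept if g != f]        # try removing this flip
        if predict(graph_with(trial), target) == wrong_label:
            kept = trial                           # still fooled: flip f was redundant
    return kept
```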
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted performance on many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying the graph.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs use gradient information to guide the attack and achieve strong performance.
We argue that such methods scale poorly mainly because they must operate on the whole graph, so their time and space complexity grows with the data scale.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data.
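Assuming DAC is the shift in the standard degree assortativity coefficient between the clean and attacked graphs (the paper's exact normalization may differ), a short networkx sketch:

```python
import networkx as nx

def degree_assortativity_change(G_clean, G_attacked):
    """Degree Assortativity Change (DAC), sketched as the absolute shift in
    the degree assortativity coefficient caused by an attack."""
    r_clean = nx.degree_assortativity_coefficient(G_clean)
    r_attacked = nx.degree_assortativity_coefficient(G_attacked)
    return abs(r_attacked - r_clean)
```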
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical
Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.