Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies
- URL: http://arxiv.org/abs/2003.00653v3
- Date: Sat, 12 Dec 2020 17:21:00 GMT
- Title: Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies
- Authors: Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal and
Jiliang Tang
- Abstract summary: Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
- Score: 73.39668293190019
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have achieved significant performance in various
tasks. However, recent studies have shown that DNNs can be easily fooled by
small perturbations on the input, called adversarial attacks. As extensions of
DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to inherit
this vulnerability. An adversary can mislead GNNs into giving wrong predictions
by modifying the graph structure, for example by manipulating a few edges. This
vulnerability has raised tremendous concerns about deploying GNNs in
safety-critical applications and has attracted increasing research attention in
recent years. Thus, it is necessary and timely to provide a comprehensive
overview of existing graph adversarial attacks and the countermeasures. In this
survey, we categorize existing attacks and defenses, and review the
corresponding state-of-the-art methods. Furthermore, we have developed a
repository with representative algorithms
(https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph). The
repository enables us to conduct empirical studies to deepen our understanding
of attacks and defenses on graphs.
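As a concrete illustration of the kind of empirical study the repository supports, the sketch below poisons the Cora citation graph with Metattack under a small edge-perturbation budget and then measures how a retrained GCN performs on the modified graph. It is adapted from the DeepRobust examples; the class names and argument signatures shown are assumptions from the repository's documentation and may differ across library versions.

```python
import numpy as np
import torch
# Classes below follow the DeepRobust graph examples; exact names/signatures
# may vary by version of the library.
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load a clean citation graph (downloaded on first use).
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Surrogate GCN that the attacker uses to estimate gradients.
surrogate = GCN(nfeat=features.shape[1], nhid=16,
                nclass=labels.max().item() + 1,
                with_relu=False, device=device).to(device)
surrogate.fit(features, adj, labels, idx_train, idx_val)

# Poison the graph structure with a budget of 50 edge perturbations.
attack = Metattack(surrogate, nnodes=adj.shape[0],
                   feature_shape=features.shape, device=device).to(device)
attack.attack(features, adj, labels, idx_train, idx_unlabeled,
              n_perturbations=50, ll_constraint=False)
modified_adj = attack.modified_adj

# Retrain a victim GCN on the poisoned graph and evaluate it.
victim = GCN(nfeat=features.shape[1], nhid=16,
             nclass=labels.max().item() + 1, device=device).to(device)
victim.fit(features, modified_adj, labels, idx_train, idx_val)
victim.test(idx_test)
```

Running the same training and test step on the clean adjacency and comparing the two accuracies gives the attack-induced performance drop, which is the type of comparison the empirical studies in the survey are built around.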
Related papers
- GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning [8.435319580412472]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks.
In this paper, we introduce the novel concept of model repair for GNNs.
We propose a repair framework, Repairing Robustness of Graph Neural Networks via Machine Unlearning (GraphMU).
arXiv Detail & Related papers (2024-06-19T12:41:15Z)
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs with respect to the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Toward the Analysis of Graph Neural Networks [1.0412114420493723]
Graph Neural Networks (GNNs) have emerged as a robust framework for graph-structured data analysis.
This paper proposes an approach to analyze GNNs by converting them into Feed Forward Neural Networks (FFNNs) and reusing existing FFNNs analyses.
arXiv Detail & Related papers (2022-01-01T04:59:49Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying the graph.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Structack: Structure-based Adversarial Attacks on Graph Neural Networks [1.795391652194214]
We study adversarial attacks that are uninformed, where an attacker only has access to the graph structure, but no information about node attributes.
We show that structure-based uninformed attacks can approach the performance of informed attacks, while being computationally more efficient.
We present a new attack strategy on GNNs that we refer to as Structack. Structack can successfully manipulate the performance of GNNs with very limited information while operating under tight computational constraints.
arXiv Detail & Related papers (2021-07-23T16:17:10Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
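To make the joint structure-learning idea in the last entry (Pro-GNN) more concrete, here is a minimal, self-contained PyTorch sketch of alternately learning a cleaned adjacency matrix and a GCN. It is an illustrative approximation, not the authors' implementation (which ships in the DeepRobust repository linked above); the two-layer GCN, the regularization weights, and the update schedule are all assumed choices.

```python
import torch
import torch.nn.functional as F

def normalize_adj(S):
    """Symmetrically normalize a dense adjacency matrix with self-loops."""
    A = S + torch.eye(S.shape[0])
    d_inv_sqrt = torch.diag(A.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ A @ d_inv_sqrt

class TwoLayerGCN(torch.nn.Module):
    """Plain two-layer GCN operating on a dense normalized adjacency."""
    def __init__(self, nfeat, nhid, nclass):
        super().__init__()
        self.lin1 = torch.nn.Linear(nfeat, nhid)
        self.lin2 = torch.nn.Linear(nhid, nclass)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(h)

def fit_structure_and_gnn(adj_obs, feats, labels, idx_train,
                          epochs=200, alpha=5e-4, beta=1.5, lr=1e-2):
    """Alternate between (1) updating a learnable graph S that stays close to
    the observed adjacency while being sparse and low-rank, and (2) training
    the GCN on the current S. Hyperparameters are illustrative only."""
    S = torch.nn.Parameter(adj_obs.clone())
    gnn = TwoLayerGCN(feats.shape[1], 16, int(labels.max()) + 1)
    opt_gnn = torch.optim.Adam(gnn.parameters(), lr=lr)
    opt_s = torch.optim.SGD([S], lr=lr)

    for _ in range(epochs):
        # (1) Graph update: task loss + fidelity + sparsity (L1) + low rank
        # (nuclear norm, computed via SVD, so only practical for small graphs).
        opt_s.zero_grad()
        out = gnn(feats, normalize_adj(S))
        loss_s = (F.cross_entropy(out[idx_train], labels[idx_train])
                  + torch.norm(S - adj_obs, p='fro') ** 2
                  + alpha * S.abs().sum()
                  + beta * torch.norm(S, p='nuc'))
        loss_s.backward()
        opt_s.step()
        with torch.no_grad():  # keep S a valid symmetric graph in [0, 1]
            S.data = (S.data + S.data.t()) / 2
            S.data.clamp_(0.0, 1.0)

        # (2) GNN update on the (detached) learned graph.
        opt_gnn.zero_grad()
        out = gnn(feats, normalize_adj(S.detach()))
        F.cross_entropy(out[idx_train], labels[idx_train]).backward()
        opt_gnn.step()

    return gnn, S.detach()
```

The graph update keeps the learned adjacency close to the observed (possibly poisoned) one, while the sparsity and low-rank terms push it back toward the statistics of clean graphs; this is the intuition behind the defense.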
This list is automatically generated from the titles and abstracts of the papers listed on this site.