DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a
Variational Graph Autoencoder
- URL: http://arxiv.org/abs/2006.08900v1
- Date: Tue, 16 Jun 2020 03:30:23 GMT
- Title: DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a
Variational Graph Autoencoder
- Authors: Ao Zhang and Jinwen Ma
- Abstract summary: Graph neural networks (GNNs) achieve remarkable performance for tasks on graph data.
Recent works show they are extremely vulnerable to adversarial structural perturbations, making their outcomes unreliable.
We propose DefenseVGAE, a novel framework leveraging variational graph autoencoders (VGAEs) to defend GNNs against such attacks.
- Score: 22.754141951413786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) achieve remarkable performance for tasks on
graph data. However, recent works show they are extremely vulnerable to
adversarial structural perturbations, making their outcomes unreliable. In this
paper, we propose DefenseVGAE, a novel framework leveraging variational graph
autoencoders (VGAEs) to defend GNNs against such attacks. DefenseVGAE is trained
to reconstruct graph structure. The reconstructed adjacency matrix can reduce
the effects of adversarial perturbations and boost the performance of GCNs when
facing adversarial attacks. Our experiments on a number of datasets show the
effectiveness of the proposed method under various threat models. Under some
settings it outperforms existing defense strategies. Our code has been made
publicly available at https://github.com/zhangao520/defense-vgae.
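The abstract describes the defense at a high level: fit a VGAE to the (possibly attacked) graph, then hand the reconstructed adjacency matrix to a GCN in place of the perturbed one. Below is a minimal sketch of that pipeline in plain PyTorch. It follows the standard VGAE of Kipf and Welling rather than the authors' released code, and the hidden sizes, training loop, and binarization threshold `tau` are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a VGAE-based structural defense, assuming small dense graphs.
# Not the authors' implementation (see https://github.com/zhangao520/defense-vgae);
# layer sizes, the training loop, and the edge threshold `tau` are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, as in a standard GCN layer."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)


class VGAE(nn.Module):
    """Two-layer GCN encoder with an inner-product decoder (Kipf & Welling, 2016)."""

    def __init__(self, in_dim: int, hid_dim: int = 32, lat_dim: int = 16):
        super().__init__()
        self.lin0 = nn.Linear(in_dim, hid_dim, bias=False)
        self.lin_mu = nn.Linear(hid_dim, lat_dim, bias=False)
        self.lin_logstd = nn.Linear(hid_dim, lat_dim, bias=False)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.lin0(x))
        mu = a_norm @ self.lin_mu(h)
        logstd = a_norm @ self.lin_logstd(h)
        z = mu + torch.randn_like(mu) * logstd.exp()  # reparameterization trick
        logits = z @ z.t()                            # inner-product decoder
        return logits, mu, logstd


def reconstruct_adjacency(x, adj, epochs=200, lr=1e-2, tau=0.5):
    """Fit a VGAE to the (possibly perturbed) graph and return a binarized
    reconstructed adjacency matrix for use by a downstream GCN."""
    a_norm = normalize_adj(adj)
    model = VGAE(x.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    n = adj.size(0)
    pos_weight = (n * n - adj.sum()) / adj.sum()  # up-weight edges: graphs are sparse
    for _ in range(epochs):
        opt.zero_grad()
        logits, mu, logstd = model(x, a_norm)
        recon = F.binary_cross_entropy_with_logits(logits, adj, pos_weight=pos_weight)
        kl = -0.5 / n * torch.mean(
            torch.sum(1 + 2 * logstd - mu.pow(2) - (2 * logstd).exp(), dim=1))
        (recon + kl).backward()
        opt.step()
    with torch.no_grad():
        logits, _, _ = model(x, a_norm)
        adj_rec = (torch.sigmoid(logits) > tau).float()
        adj_rec.fill_diagonal_(0)  # no self-loops in the returned structure
    return adj_rec

# Usage with hypothetical tensors: clean_adj = reconstruct_adjacency(features, perturbed_adj),
# then train an ordinary GCN on (features, clean_adj) instead of the attacked graph.
```

The thresholding step is one plausible way to turn the dense reconstruction back into a graph; how the paper combines the reconstructed and original adjacency matrices may differ, so treat this as a sketch of the general idea rather than a reproduction.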
Related papers
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees [60.61846004535707]
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box structure-perturbation attacks on GNNs and provide theoretical guarantees.
arXiv Detail & Related papers (2022-05-07T04:17:25Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of perturbed edges in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense [3.3504365823045035]
Graph Neural Networks (GNNs) have received significant attention due to their state-of-the-art performance on various graph representation learning tasks.
Recent studies reveal that GNNs are vulnerable to adversarial attacks, i.e., an attacker can fool a GNN by deliberately perturbing the graph structure or node features.
Most existing attacking algorithms require access to either the model parameters or the training data, which is not practical in the real world.
arXiv Detail & Related papers (2021-04-30T15:30:47Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to their lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance, but they scale poorly: they have to use the whole graph for attacks, so time and space complexity grow as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)