Scalable Attack on Graph Data by Injecting Vicious Nodes
- URL: http://arxiv.org/abs/2004.13825v1
- Date: Wed, 22 Apr 2020 02:11:13 GMT
- Title: Scalable Attack on Graph Data by Injecting Vicious Nodes
- Authors: Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua
Zheng
- Abstract summary: Graph convolution networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
We develop a more scalable framework named Approximate Fast Gradient Sign Method (AFGSM) which considers a more practical attack scenario.
Our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing the attack performance.
- Score: 44.56647129718062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that graph convolution networks (GCNs) are
vulnerable to carefully designed attacks, which aim to cause misclassification
of a specific node on the graph with unnoticeable perturbations. However, a
vast majority of existing works cannot handle large-scale graphs because of
their high time complexity. Additionally, existing works mainly focus on
manipulating existing nodes on the graph, while in practice, attackers usually
do not have the privilege to modify information of existing nodes. In this
paper, we develop a more scalable framework named Approximate Fast Gradient
Sign Method (AFGSM) which considers a more practical attack scenario where
adversaries can only inject new vicious nodes into the graph while having no
control over the original graph. Methodologically, we provide an approximation
strategy to linearize the model we attack and then derive an approximate
closed-form solution with a lower time cost. For a fair comparison with
existing attack methods that manipulate the original graph, we adapt them to
the new attack scenario by injecting vicious nodes. Experimental results show
that our proposed attack method can significantly reduce the
classification accuracy of GCNs and is much faster than existing methods
without jeopardizing the attack performance.
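The abstract's key computational idea, linearizing the attacked GCN and taking an approximate, closed-form gradient step for the injected node, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration of that general recipe (a linearized two-layer GCN surrogate Z = softmax(Â²XW) and an FGSM-like choice of binary features for one vicious node wired to the target), not the authors' AFGSM implementation; the function names, weights W, and the feature budget are placeholders.

```python
# Illustrative sketch only: a linear GCN surrogate plus a sign-of-gradient
# feature choice for a single injected node. Not the paper's AFGSM code.
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def inject_one_vicious_node(A, X, W, target, true_label, feat_budget=3):
    """Append one vicious node linked to `target` and pick its binary features
    with a single sign-of-gradient step on the target's classification loss."""
    n, d = X.shape
    # Grow the graph: new node v = n, connected only to the target node.
    A_new = np.zeros((n + 1, n + 1))
    A_new[:n, :n] = A
    A_new[n, target] = A_new[target, n] = 1.0
    X_new = np.vstack([X, np.zeros(d)])      # vicious node starts with empty features

    # Linearized two-layer GCN surrogate: Z = softmax(A_hat^2 X W).
    A_hat = normalize_adj(A_new)
    M = A_hat @ A_hat
    logits = M @ X_new @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)

    # Gradient of the target's cross-entropy loss w.r.t. the injected node's
    # features, holding the structure fixed: dL/dx_v = M[target, v] * W (p_t - y_t).
    y = np.zeros(W.shape[1])
    y[true_label] = 1.0
    grad_xv = M[target, n] * (W @ (p[target] - y))

    # FGSM-like step for binary features: switch on the entries whose positive
    # gradient most increases the loss, within a small feature budget.
    order = np.argsort(-grad_xv)
    chosen = [i for i in order[:feat_budget] if grad_xv[i] > 0]
    X_new[n, chosen] = 1.0
    return A_new, X_new
```

A full attack would also choose the vicious node's edges and repeat the injection under a global budget; the sketch only shows the per-node feature step.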
Related papers
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - GUAP: Graph Universal Attack Through Adversarial Patching [12.484396767037925]
Graph neural networks (GNNs) are a class of effective deep learning models for node classification tasks.
In this work, we consider an easier attack that is harder to notice, carried out by adversarially patching the graph with new nodes and edges.
We develop an algorithm, named GUAP, that achieves a high attack success rate while preserving the prediction accuracy.
arXiv Detail & Related papers (2023-01-04T18:02:29Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Query-based Adversarial Attacks on Graph with Fake Nodes [32.67989796394633]
We propose a novel adversarial attack by introducing a set of fake nodes into the original graph.
Specifically, we query the victim model for each victim node to acquire its most adversarial feature.
Our attack is performed in a practical and unnoticeable manner.
arXiv Detail & Related papers (2021-09-27T14:19:17Z) - Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance, yet they scale poorly to large graphs.
We argue that the main reason is that they have to use the whole graph for the attack, so the time and space complexity grow with the scale of the data.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
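The DAC metric named in the entry above is not defined in this summary; one plausible reading, sketched below with networkx, measures how much the graph's degree assortativity coefficient shifts after an attack. The exact definition in the cited paper may differ.

```python
# Assumed reading of "Degree Assortativity Change", for illustration only.
import networkx as nx

def degree_assortativity_change(G_clean, G_attacked):
    """Shift in the degree assortativity coefficient induced by the attack."""
    r_clean = nx.degree_assortativity_coefficient(G_clean)
    r_attacked = nx.degree_assortativity_coefficient(G_attacked)
    return abs(r_attacked - r_clean)

# Example: wiring a high-degree hub to low-degree nodes changes assortativity.
G = nx.barabasi_albert_graph(100, 3, seed=0)
G_adv = G.copy()
G_adv.add_edges_from([(0, v) for v in range(90, 100) if not G_adv.has_edge(0, v)])
print(degree_assortativity_change(G, G_adv))
```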
arXiv Detail & Related papers (2020-09-08T02:17:55Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z) - Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
arXiv Detail & Related papers (2019-02-22T09:20:05Z)
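The core principle of the last entry, using meta-gradients to solve the bilevel problem behind training-time attacks, can be sketched roughly as follows in PyTorch. The surrogate model, inner-loop length, meta-loss choice, and all sizes are illustrative assumptions, not the original method's exact configuration.

```python
# Simplified meta-gradient sketch (not the authors' code): relax the adjacency
# to a continuous tensor, differentiate the attacker's loss through a short
# inner training loop of a GCN surrogate, and flip the highest-scoring edge.
import torch
import torch.nn.functional as F

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN surrogate evaluated on a dense (relaxed) adjacency A."""
    A_tilde = A + torch.eye(A.shape[0])
    d_inv_sqrt = A_tilde.sum(1).clamp(min=1e-8).pow(-0.5)
    A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return A_hat @ torch.relu(A_hat @ X @ W1) @ W2

def meta_gradient_flip(A, X, labels, train_mask, inner_steps=5, lr=0.1):
    """Return the single edge flip (i, j) with the largest meta-gradient score."""
    A = A.detach().clone().requires_grad_(True)
    W1 = (0.1 * torch.randn(X.shape[1], 16)).requires_grad_(True)
    W2 = (0.1 * torch.randn(16, int(labels.max()) + 1)).requires_grad_(True)

    # Inner loop: train the surrogate while keeping the computation graph
    # (create_graph=True), so gradients can later flow back into A.
    for _ in range(inner_steps):
        loss = F.cross_entropy(gcn_forward(A, X, W1, W2)[train_mask],
                               labels[train_mask])
        g1, g2 = torch.autograd.grad(loss, (W1, W2), create_graph=True)
        W1, W2 = W1 - lr * g1, W2 - lr * g2

    # Meta-loss: the attacker wants the *trained* surrogate to perform badly,
    # so the training loss is ascended with respect to the graph structure.
    meta_loss = F.cross_entropy(gcn_forward(A, X, W1, W2)[train_mask],
                                labels[train_mask])
    meta_grad = torch.autograd.grad(meta_loss, A)[0]

    # Adding an absent edge helps if its gradient is positive; removing an
    # existing edge helps if its gradient is negative.
    score = meta_grad * (1 - 2 * A.detach())
    i, j = divmod(int(score.argmax()), A.shape[1])
    return i, j
```

A complete attack would typically symmetrize each flip, enforce a perturbation budget, and repeat this selection greedily.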
This list is automatically generated from the titles and abstracts of the papers in this site.