Scalable Attack on Graph Data by Injecting Vicious Nodes
- URL: http://arxiv.org/abs/2004.13825v1
- Date: Wed, 22 Apr 2020 02:11:13 GMT
- Title: Scalable Attack on Graph Data by Injecting Vicious Nodes
- Authors: Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua
Zheng
- Abstract summary: Graph convolution networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
We develop a more scalable framework named Approximate Fast Gradient Sign Method (AFGSM) which considers a more practical attack scenario.
Our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing the attack performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that graph convolution networks (GCNs) are
vulnerable to carefully designed attacks, which aim to cause misclassification
of a specific node on the graph with unnoticeable perturbations. However, a
vast majority of existing works cannot handle large-scale graphs because of
their high time complexity. Additionally, existing works mainly focus on
manipulating existing nodes on the graph, while in practice, attackers usually
do not have the privilege to modify information of existing nodes. In this
paper, we develop a more scalable framework named Approximate Fast Gradient
Sign Method (AFGSM) which considers a more practical attack scenario where
adversaries can only inject new vicious nodes to the graph while having no
control over the original graph. Methodologically, we provide an approximation
strategy to linearize the model we attack and then derive an approximate
closed-form solution with a lower time cost. To have a fair comparison with
existing attack methods that manipulate the original graph, we adapt them to
the new attack scenario by injecting vicious nodes. Empirical
results show that our proposed attack method can significantly reduce the
classification accuracy of GCNs and is much faster than existing methods
without jeopardizing the attack performance.
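The abstract's core idea, linearizing the attacked GCN and choosing an injected node's features from the resulting gradient, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical reconstruction assuming a two-layer GCN surrogate with activations removed (logits = Â²XW); the helper names, the single fast-gradient-sign step, and the target-only edge wiring are illustrative assumptions, not the paper's exact closed-form solution.

```python
# Hypothetical sketch of a fast-gradient-sign-style vicious-node injection
# against a linearized two-layer GCN, in the spirit of AFGSM. Function names
# and the single-step update are illustrative assumptions, not the authors'
# exact derivation.
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def linearized_gcn_logits(A, X, W):
    """Two-layer GCN surrogate with activations dropped (SGC-style):
    logits = A_hat^2 X W."""
    A_hat = normalize_adj(A)
    return A_hat @ A_hat @ X @ W

def inject_vicious_node(A, X, W, target, budget):
    """Append one vicious node wired to the target node, then pick its
    binary features by the sign of the gradient of the target's predicted-
    class logit (one fast-gradient-sign step on the linearized surrogate)."""
    n, d = X.shape
    # Wire the new node (index n) to the target; the full attack would also
    # optimize which edges to add, under the no-control-over-original-graph
    # constraint described in the abstract.
    A_new = np.zeros((n + 1, n + 1))
    A_new[:n, :n] = A
    A_new[n, target] = A_new[target, n] = 1.0
    x_new = np.zeros(d)  # injected node's features, to be chosen below

    # For logits = A_hat^2 X W, the gradient of logit[target, c] w.r.t. the
    # injected feature vector is (A_hat^2)[target, n] * W[:, c].
    A_hat = normalize_adj(A_new)
    coeff = (A_hat @ A_hat)[target, n]
    # Use the clean prediction as a proxy for the target's true class.
    pred_class = np.argmax(linearized_gcn_logits(A, X, W)[target])
    grad = coeff * W[:, pred_class]

    # Turn on the `budget` features that most decrease that logit, i.e. the
    # most negative gradient entries.
    worst = np.argsort(grad)[:budget]
    x_new[worst[grad[worst] < 0]] = 1.0
    return A_new, np.vstack([X, x_new])
```

In the paper itself, the feature and edge choices come from an approximate closed-form solution rather than an iterative gradient step, which is what lets the method scale to large graphs.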