Adversarial Attack on Large Scale Graph
- URL: http://arxiv.org/abs/2009.03488v2
- Date: Thu, 6 May 2021 14:15:27 GMT
- Title: Adversarial Attack on Large Scale Graph
- Authors: Jintang Li, Tao Xie, Liang Chen, Fenfang Xie, Xiangnan He, Zibin Zheng
- Abstract summary: Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to their lack of robustness.
Most existing attacks on GNNs use gradient information to guide the attack and achieve outstanding performance, but they must operate on the whole graph, so their time and space complexity grows with the data scale.
We propose an efficient Simplified Gradient-based Attack (SGA) that needs only a much smaller subgraph, and we present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data.
- Score: 58.741365277995044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have shown that graph neural networks (GNNs) are vulnerable
to perturbations due to their lack of robustness and can therefore be easily
fooled. Currently, most works on attacking GNNs mainly use gradient
information to guide the attack and achieve outstanding performance. However,
their high time and space complexity makes them unmanageable for large-scale
graphs and becomes the major bottleneck that prevents practical usage. We
argue that the main reason is that they have to use the whole graph for
attacks, so the time and space complexity increases as the data
scale grows. In this work, we propose an efficient Simplified Gradient-based
Attack (SGA) method to bridge this gap. SGA can cause GNNs to misclassify
specific target nodes through a multi-stage attack framework that needs only
a much smaller subgraph. In addition, we present a practical metric named
Degree Assortativity Change (DAC) to measure the impact of adversarial attacks
on graph data. We evaluate our attack method on four real-world graph networks
by attacking several commonly used GNNs. The experimental results demonstrate
that SGA achieves significant time and memory efficiency improvements while
maintaining competitive attack performance compared to state-of-the-art attack
techniques. Code is available at: https://github.com/EdisonLeeeee/SGAttack.
Related papers
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- GUAP: Graph Universal Attack Through Adversarial Patching [12.484396767037925]
Graph neural networks (GNNs) are a class of effective deep learning models for node classification tasks.
In this work, we consider an easier attack that is harder to notice: adversarially patching the graph with new nodes and edges.
We develop an algorithm, named GUAP, that achieves a high attack success rate while preserving the prediction accuracy.
arXiv Detail & Related papers (2023-01-04T18:02:29Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework, CHAGNN, against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification by perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function [62.89388227354517]
Graph neural network (GNN), the mainstream method to learn on graph data, is vulnerable to graph evasion attacks.
Existing work has at least one of the following drawbacks: 1) limited to directly attacking two-layer GNNs; 2) inefficient; and 3) impractical, as they need to know all or part of the GNN model parameters.
We propose an influence-based efficient, direct, and restricted black-box evasion attack to any-layer GNNs.
arXiv Detail & Related papers (2020-09-01T03:24:51Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Scalable Attack on Graph Data by Injecting Vicious Nodes [44.56647129718062]
Graph convolution networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
We develop a more scalable framework named Approximate Fast Gradient Sign Method (AFGSM) which considers a more practical attack scenario.
Our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing the attack performance.
arXiv Detail & Related papers (2020-04-22T02:11:13Z)