Query-based Adversarial Attacks on Graph with Fake Nodes
- URL: http://arxiv.org/abs/2109.13069v1
- Date: Mon, 27 Sep 2021 14:19:17 GMT
- Title: Query-based Adversarial Attacks on Graph with Fake Nodes
- Authors: Zhengyi Wang, Zhongkai Hao, Hang Su, Jun Zhu
- Abstract summary: We propose a novel adversarial attack by introducing a set of fake nodes to the original graph.
Specifically, we query the victim model for each victim node to acquire its most adversarial feature.
Our attack is performed in a practical and unnoticeable manner.
- Score: 32.67989796394633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep neural networks have achieved great success in graph analysis, recent works have shown that they are also vulnerable to adversarial attacks in which fraudulent users can fool the model with a limited number of queries. Compared with adversarial attacks on image classification, performing adversarial attacks on graphs is challenging because of the discrete and non-differentiable nature of a graph. To address these issues, we propose Cluster Attack, a novel adversarial attack that introduces a set of fake nodes into the original graph to mislead the classification of certain victim nodes. Specifically, we query the victim model for each victim node to acquire its most adversarial feature, which is related to how a fake node's feature will affect the victim nodes. We further cluster the victim nodes into several subgroups according to their most adversarial features so as to reduce the search space. Moreover, our attack is performed in a practical and unnoticeable manner: (1) We protect the predicted labels of nodes that are not our targets from being changed during the attack. (2) We attack by introducing fake nodes into the original graph without changing existing links or features. (3) We attack with only partial information about the attacked graph, i.e., by leveraging the information of the victim nodes and their neighbors within $k$ hops rather than the whole graph. (4) We perform the attack with a limited number of queries for the model's predicted scores in a black-box manner, i.e., without access to the model architecture or parameters. Extensive experiments demonstrate the effectiveness of our method in terms of attack success rate.
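To make the clustering step above concrete, here is a minimal sketch of how victim nodes might be grouped by their most adversarial features. The helper `query_most_adversarial_feature`, the choice of k-means, and `n_fake_nodes` are illustrative assumptions, not details confirmed by the paper.

```python
# A minimal sketch, NOT the authors' implementation. It assumes a helper
# `query_most_adversarial_feature` that returns, via black-box queries of
# the victim model's predicted scores, the feature vector most harmful to
# a given victim node. k-means and `n_fake_nodes` are also assumptions.
import numpy as np
from sklearn.cluster import KMeans

def cluster_victims(victim_ids, query_most_adversarial_feature, n_fake_nodes):
    """Group victim nodes whose most adversarial features are similar,
    so that each fake node only has to serve one subgroup of victims."""
    # One round of black-box queries per victim node.
    feats = np.stack([query_most_adversarial_feature(v) for v in victim_ids])
    # Clustering shrinks the search space: instead of optimizing one fake
    # feature per victim, we optimize one per cluster.
    labels = KMeans(n_clusters=n_fake_nodes, n_init=10).fit_predict(feats)
    return {c: [v for v, l in zip(victim_ids, labels) if l == c]
            for c in range(n_fake_nodes)}
```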
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z) - Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias [50.628150015907565]
The cross-entropy loss function is used to evaluate perturbation schemes in classification tasks.
Previous methods use the negative cross-entropy loss as the attack objective when attacking node-level classification models (a minimal sketch of this objective appears after this list).
This paper argues that this attack objective is unreasonable from the perspective of budget allocation.
arXiv Detail & Related papers (2023-03-29T13:02:02Z) - GUAP: Graph Universal Attack Through Adversarial Patching [12.484396767037925]
Graph neural networks (GNNs) are a class of effective deep learning models for node classification tasks.
In this work, we consider an attack that is easier to perform yet harder to notice: adversarially patching the graph with new nodes and edges.
We develop an algorithm, named GUAP, that achieves a high attack success rate while preserving prediction accuracy.
arXiv Detail & Related papers (2023-01-04T18:02:29Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z) - Scalable Attack on Graph Data by Injecting Vicious Nodes [44.56647129718062]
Graph convolution networks (GCNs) are vulnerable to carefully designed attacks, which aim to cause misclassification of a specific node on the graph with unnoticeable perturbations.
We develop a more scalable framework named Approximate Fast Gradient Sign Method (AFGSM), which considers a more practical attack scenario (see the gradient-sign sketch after this list).
Our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing the attack performance.
arXiv Detail & Related papers (2020-04-22T02:11:13Z) - Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks [0.76146285961466]
By abusing graph convolutions, an attacker can influence a node's classification result by poisoning its neighbors.
We generate strong adversarial perturbations that are effective not only on one-hop neighbors but also on nodes farther from the target.
Our proposed method achieves a 99% attack success rate within two hops of the target on two datasets.
arXiv Detail & Related papers (2020-02-19T05:44:09Z) - Adversarial Attack on Community Detection by Hiding Individuals [68.76889102470203]
We focus on black-box attack and aim to hide targeted individuals from the detection of deep graph community detection models.
We propose an iterative learning framework that alternately updates two modules: one working as a constrained graph generator and the other as a surrogate community detection model.
arXiv Detail & Related papers (2020-01-22T09:50:04Z)
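As referenced in the budget-allocation entry above, the negative cross-entropy attack objective used by earlier untargeted structure attacks can be written in a few lines. This is a generic sketch, not the code of any paper listed here; all names are illustrative.

```python
# Generic sketch of the negative cross-entropy attack objective used by
# earlier untargeted structure attacks (not any listed paper's code).
import torch
import torch.nn.functional as F

def attack_objective(logits: torch.Tensor, labels: torch.Tensor,
                     target_idx: torch.Tensor) -> torch.Tensor:
    """Negative cross-entropy on the target nodes: minimizing this value
    maximizes the classifier's loss, pushing predictions off the labels."""
    return -F.cross_entropy(logits[target_idx], labels[target_idx])
```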
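Likewise, the gradient-sign idea behind AFGSM-style node injection, referenced above, can be sketched as a single FGSM step on the injected node's features. The paper's actual approximation is more involved; every name below is an assumption.

```python
# FGSM-style sketch of setting an injected (vicious) node's features via
# the sign of the loss gradient; AFGSM's actual approximation differs.
import torch

def fgsm_step_on_fake_features(loss: torch.Tensor, fake_feat: torch.Tensor,
                               eps: float = 1.0) -> torch.Tensor:
    """One gradient-sign ascent step on the fake node's feature vector.
    Clamping to [0, 1] assumes binary or normalized node features."""
    grad, = torch.autograd.grad(loss, fake_feat)
    return (fake_feat + eps * grad.sign()).clamp(0.0, 1.0).detach()
```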
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.