Adversarial Attack Framework on Graph Embedding Models with Limited
Knowledge
- URL: http://arxiv.org/abs/2105.12419v1
- Date: Wed, 26 May 2021 09:18:58 GMT
- Title: Adversarial Attack Framework on Graph Embedding Models with Limited
Knowledge
- Authors: Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng
Cui, Xin Wang, Wenwu Zhu, Junzhou Huang
- Abstract summary: Existing works usually perform the attack in a white-box fashion.
We aim to attack various kinds of graph embedding models in a black-box fashion.
We prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models.
- Score: 126.32842151537217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the success of the graph embedding model in both academic and industry
areas, the robustness of graph embedding against adversarial attack inevitably
becomes a crucial problem in graph learning. Existing works usually perform the
attack in a white-box fashion: they need to access the predictions/labels to
construct their adversarial loss. However, the inaccessibility of
predictions/labels makes the white-box attack impractical for a real graph
learning system. This paper extends current frameworks to a more general and
flexible setting: we aim to attack various kinds of graph embedding models in a
black-box fashion. We investigate the theoretical connections between graph
signal processing and graph embedding models and formulate the graph embedding
model as a general graph signal process with a corresponding graph filter.
Therefore, we design a generalized adversarial attacker: GF-Attack. Without
accessing any labels and model predictions, GF-Attack can perform the attack
directly on the graph filter in a black-box fashion. We further prove that
GF-Attack can perform an effective attack without knowing the number of layers
of graph embedding models. To validate the generalization of GF-Attack, we
construct the attacker on four popular graph embedding models. Extensive
experiments validate the effectiveness of GF-Attack on several benchmark
datasets.
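To make the filter-level idea concrete, the following is a minimal sketch of a label-free, query-free structural attack in the same spirit: it assumes the target embedding behaves like a polynomial of the symmetric normalized adjacency (the "graph filter") and greedily flips the edge that most perturbs the filter's leading spectrum. The surrogate loss, function names, and greedy loop are illustrative assumptions, not the authors' GF-Attack implementation.

```python
# Minimal sketch (not the authors' GF-Attack code): a black-box, label-free
# structural attack that perturbs the graph filter directly.
# Assumption: the target embedding acts like a polynomial of the symmetric
# normalized adjacency, so damaging its leading spectrum is a plausible
# surrogate objective.
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def spectral_surrogate(A, k=4):
    """Surrogate loss: energy of the k largest-magnitude eigenvalues of the filter."""
    eigvals = np.linalg.eigvalsh(normalized_adjacency(A))
    top_k = np.sort(np.abs(eigvals))[-k:]
    return np.sum(top_k ** 2)

def greedy_filter_attack(A, budget=2, k=4):
    """Greedily flip the edge that most changes the filter's leading spectrum.

    No labels or model predictions are used; only the graph structure.
    """
    A = A.copy()
    base = spectral_surrogate(A, k)
    n = A.shape[0]
    flips = []
    for _ in range(budget):
        best_gain, best_edge = -np.inf, None
        for i in range(n):
            for j in range(i + 1, n):
                A[i, j] = A[j, i] = 1 - A[i, j]    # tentatively flip edge (i, j)
                gain = abs(spectral_surrogate(A, k) - base)
                A[i, j] = A[j, i] = 1 - A[i, j]    # undo the tentative flip
                if gain > best_gain:
                    best_gain, best_edge = gain, (i, j)
        i, j = best_edge
        A[i, j] = A[j, i] = 1 - A[i, j]            # commit the best flip
        base = spectral_surrogate(A, k)
        flips.append(best_edge)
    return A, flips

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((10, 10)) < 0.3).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # small random undirected graph
    _, flips = greedy_filter_attack(A, budget=2)
    print("flipped edges:", flips)
```

The eigenvalue-energy surrogate above only mirrors the label-free, filter-level spirit of the attack; the paper defines its own objective directly on the graph filter of each targeted embedding model.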
Related papers
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Unsupervised Graph Poisoning Attack via Contrastive Loss
Back-propagation [18.671374133506838]
We propose a novel unsupervised gradient-based adversarial attack on graph contrastive learning that does not rely on labels.
Our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks.
arXiv Detail & Related papers (2022-01-20T03:32:21Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Graphfool: Targeted Label Adversarial Attack on Graph Embedding [11.866894644607894]
We propose Graphfool, a novel targeted label adversarial attack on graph embedding.
It can generate adversarial graph to attack graph embedding methods via classifying boundary and gradient information.
Experiments on real-world graph networks demonstrate that Graphfool achieves better performance than state-of-the-art techniques.
arXiv Detail & Related papers (2021-02-24T13:45:38Z) - GraphAttacker: A General Multi-Task GraphAttack Framework [4.218118583619758]
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, GNNs are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
arXiv Detail & Related papers (2021-01-18T03:06:41Z) - Query-free Black-box Adversarial Attacks on Graphs [37.88689315688314]
We propose a query-free black-box adversarial attack on graphs, in which the attacker has no knowledge of the target model and no query access to the model.
We prove that the impact of the flipped links on the target model can be quantified by spectral changes and thus approximated using eigenvalue perturbation theory (a minimal sketch of this first-order approximation appears after this list).
Owing to its simplicity and scalability, the proposed attack is not only generic across various graph-based models but can also be easily extended when different levels of knowledge are accessible.
arXiv Detail & Related papers (2020-12-12T08:52:56Z) - Reinforcement Learning-based Black-Box Evasion Attacks to Link
Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
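The first-order eigenvalue perturbation idea referenced in the query-free black-box attack above can be sketched in a few lines: for a symmetric edge perturbation dA, each eigenvalue shifts approximately by u_i^T dA u_i, so the spectral impact of flipping a link can be scored without recomputing the full eigendecomposition. The graph, the chosen edge, and the error check below are illustrative assumptions, not that paper's implementation.

```python
# Sketch of first-order eigenvalue perturbation for a single edge flip:
#   delta(lambda_i) ~= u_i^T dA u_i
# compared against exact recomputation of the perturbed spectrum.
import numpy as np

rng = np.random.default_rng(0)
n = 12
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                 # small random undirected graph

eigvals, U = np.linalg.eigh(A)                 # spectrum of the clean graph

# Flip edge (0, 1): dA is a symmetric pair of +1/-1 entries.
i, j = 0, 1
sign = 1.0 - 2.0 * A[i, j]                     # +1 if adding the edge, -1 if removing
dA = np.zeros_like(A)
dA[i, j] = dA[j, i] = sign

# First-order estimate of each eigenvalue vs. exact recomputation.
approx = eigvals + np.einsum("ki,kl,li->i", U, dA, U)
exact = np.linalg.eigvalsh(A + dA)
print("max abs error of first-order estimate:",
      np.max(np.abs(np.sort(approx) - exact)))
```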
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.