Graphfool: Targeted Label Adversarial Attack on Graph Embedding
- URL: http://arxiv.org/abs/2102.12284v1
- Date: Wed, 24 Feb 2021 13:45:38 GMT
- Title: Graphfool: Targeted Label Adversarial Attack on Graph Embedding
- Authors: Jinyin Chen, Xiang Lin, Dunjie Zhang, Wenrong Jiang, Guohan Huang, Hui
Xiong, and Yun Xiang
- Abstract summary: We propose Graphfool, a novel targeted label adversarial attack on graph embedding.
It can generate adversarial graphs to attack graph embedding methods using classification-boundary and gradient information.
Experiments on real-world graph networks demonstrate that Graphfool outperforms state-of-the-art techniques.
- Score: 11.866894644607894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning is effective in graph analysis. It is widely applied in many
related areas, such as link prediction, node classification, community
detection, and graph classification. Graph embedding, which learns
low-dimensional representations for vertices or edges in the graph, usually
employs deep models to derive the embedding vector. However, these models are
vulnerable. We envision that graph embedding methods based on deep models can
be easily attacked using adversarial examples. Thus, in this paper, we propose
Graphfool, a novel targeted label adversarial attack on graph embedding. It
generates adversarial graphs to attack graph embedding methods using the
classification boundaries and gradient information of a graph convolutional
network (GCN).
Specifically, we perform the following steps: 1) we estimate the
classification boundaries of the different classes; 2) we calculate the
minimal perturbation matrix that pushes the attacked vertex across the target
classification boundary; 3) we modify the adjacency matrix at the entry with
the maximal absolute value of the perturbation matrix. This process is applied
iteratively. To the best of our knowledge, this is the first targeted label
attack technique. The experiments on real-world graph networks demonstrate that
Graphfool achieves better performance than state-of-the-art techniques. Compared
with the second best algorithm, Graphfool can achieve an average improvement of
11.44% in attack success rate.
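The three numbered steps amount to a DeepFool-style linearization of the GCN's decision boundaries. A minimal sketch of the loop, assuming a trained model exposed as `model(features, adj)` and a binary adjacency matrix (all names here are illustrative, not the authors' code):

```python
# Illustrative sketch only; `model(features, adj)` is an assumed GCN interface.
import torch

def graphfool(model, adj, features, v, target, budget=10):
    """Iteratively flip the adjacency entry whose minimal perturbation best
    pushes vertex v across the target class's linearized boundary."""
    adj = adj.clone()
    for _ in range(budget):
        adj_var = adj.clone().requires_grad_(True)
        logits = model(features, adj_var)           # [num_nodes, num_classes]
        pred = int(logits[v].argmax())
        if pred == target:                          # attack succeeded
            break
        # Step 1: linearize the boundary between the current and target class.
        margin = logits[v, target] - logits[v, pred]
        grad = torch.autograd.grad(margin, adj_var)[0]
        # Step 2: minimal DeepFool-style perturbation matrix.
        pert = margin.abs().detach() * grad / (grad.norm() ** 2 + 1e-12)
        # Step 3: flip the entry with the maximal absolute perturbation.
        i, j = divmod(int(pert.abs().flatten().argmax()), adj.size(1))
        adj[i, j] = 1.0 - adj[i, j]
        adj[j, i] = adj[i, j]                       # keep the graph undirected
    return adj
```

Each iteration linearizes the margin to the target class and flips the single adjacency entry with the largest perturbation magnitude, mirroring steps 1)-3) above.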
Related papers
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Semi-Supervised Hierarchical Graph Classification [54.25165160435073]
We study the node classification problem in the hierarchical graph where a 'node' is a graph instance.
We propose the Hierarchical Graph Mutual Information (HGMI) and present a way to compute HGMI with theoretical guarantee.
We demonstrate the effectiveness of this hierarchical graph modeling and the proposed SEAL-CI method on text and social network data.
arXiv Detail & Related papers (2022-06-11T04:05:29Z)
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation [18.671374133506838]
We propose a novel unsupervised gradient-based adversarial attack that does not rely on labels for graph contrastive learning.
Our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks.
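The core idea can be sketched compactly: build two views of the node embeddings, back-propagate an InfoNCE-style contrastive loss to the adjacency matrix, and flip the most loss-sensitive edges. The `encoder(features, adj)` interface, the augmentation, and the flip rule below are assumptions, not the paper's exact recipe:

```python
# Illustrative sketch; encoder, augmentation, and flip heuristic are assumed.
import torch
import torch.nn.functional as F

def poison_step(encoder, adj, feats, n_flips=5, tau=0.5):
    adj_var = adj.clone().requires_grad_(True)
    z1 = encoder(feats, adj_var)                    # view 1: clean features
    z2 = encoder(F.dropout(feats, 0.2), adj_var)    # view 2: feature dropout
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                         # pairwise cosine similarities
    idx = torch.arange(z1.size(0))                  # positives on the diagonal
    loss = F.cross_entropy(sim, idx)                # InfoNCE -- no class labels
    loss.backward()
    flat = adj_var.grad.abs().flatten()
    top = flat.topk(n_flips).indices                # most loss-sensitive entries
    out = adj.clone().flatten()
    out[top] = 1.0 - out[top]                       # flip those edges
    return out.view(adj.shape)
```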
arXiv Detail & Related papers (2022-01-20T03:32:21Z)
- Inference Attacks Against Graph Neural Networks [33.19531086886817]
Graph embedding is a powerful tool for solving graph analytics problems.
While sharing graph embeddings is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of the graph embedding by mounting three inference attacks.
arXiv Detail & Related papers (2021-10-06T10:08:11Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
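The projected gradient module might look like the following sketch: a gradient step on a continuous adjacency surrogate, then projections back onto the [0, 1] box and a sparsity budget. The loss, step size, and top-k projection are assumptions, not the paper's implementation:

```python
# Illustrative sketch; the loss, learning rate, and top-k projection are assumed.
import torch

def pgd_adjacency_step(a_relaxed, loss, lr=0.01, budget=0.05):
    grad = torch.autograd.grad(loss, a_relaxed)[0]
    with torch.no_grad():
        a_new = a_relaxed - lr * grad               # gradient step on the surrogate
        a_new.clamp_(0.0, 1.0)                      # project back onto the [0, 1] box
        k = max(1, int(budget * a_new.numel()))     # sparsity budget
        thresh = a_new.flatten().topk(k).values[-1]
        a_new[a_new < thresh] = 0.0                 # keep only the strongest entries
    return a_new.requires_grad_(True)
```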
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge [126.32842151537217]
Existing works usually perform the attack in a white-box fashion.
We aim to attack various kinds of graph embedding models in a black-box setting.
We prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models.
arXiv Detail & Related papers (2021-05-26T09:18:58Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
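The transformation is easy to see on a toy graph (networkx is used here for illustration only; this is not the paper's code):

```python
# Every node of the line graph L(G) is an edge of G, so predicting a link in G
# becomes classifying a node in L(G).
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
L = nx.line_graph(G)

print(sorted(L.nodes()))  # [(0, 1), (0, 2), (1, 2), (2, 3)] -- one per edge of G
# Two line-graph nodes are adjacent iff the original edges share an endpoint:
print(L.has_edge((0, 1), (1, 2)))  # True (both edges touch vertex 1)
```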
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Contrastive Self-supervised Learning for Graph Classification [21.207647143672585]
We propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting.
In the first approach, we use CSSL to pretrain graph encoders on widely-available unlabeled graphs without relying on human-provided labels.
In the second approach, we develop a regularizer based on CSSL, and solve the supervised classification task and the unsupervised CSSL task simultaneously.
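The second approach reduces to a joint objective. A toy version, where the weight `lam`, temperature `tau`, and two-view setup are assumptions:

```python
# Toy joint objective; lam, tau, and the view construction are assumed.
import torch
import torch.nn.functional as F

def joint_loss(logits, labels, z_view1, z_view2, lam=0.1, tau=0.5):
    sup = F.cross_entropy(logits, labels)           # supervised classification term
    z1 = F.normalize(z_view1, dim=1)                # embeddings of two augmented
    z2 = F.normalize(z_view2, dim=1)                # views of each graph
    sim = z1 @ z2.t() / tau
    pos = torch.arange(z1.size(0))                  # matched views are positives
    cssl = F.cross_entropy(sim, pos)                # contrastive (CSSL) regularizer
    return sup + lam * cssl
```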
arXiv Detail & Related papers (2020-09-13T05:12:55Z)
- Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing [22.16111584447466]
Graph-based machine learning models are vulnerable to adversarial perturbations due to the non-i.i.d. nature of graph data.
We build a smoothed graph classification model with certified robustness guarantee.
We also evaluate the effectiveness of our approach under graph convolutional network (GCN) based multi-class graph classification model.
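A plain Monte-Carlo version of such a smoothed classifier, with the edge-flip probability and re-symmetrization as assumptions and the certification bounds omitted:

```python
# Monte-Carlo sketch only; certified-radius computation is omitted.
import torch

def smoothed_predict(classifier, adj, feats, num_classes, beta=0.01, n=200):
    votes = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n):
        flip = (torch.rand_like(adj) < beta).float()   # Bernoulli edge flips
        noisy = torch.triu((adj + flip) % 2, 1)        # XOR flips, upper triangle
        noisy = noisy + noisy.t()                      # re-symmetrize the graph
        votes[int(classifier(feats, noisy).argmax())] += 1
    return int(votes.argmax())                         # majority-vote label
```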
arXiv Detail & Related papers (2020-09-12T22:18:54Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
- Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification that perturb the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
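Compressed to a single unrolled inner step with a linear two-layer GCN surrogate (real attacks unroll many training steps; every name here is illustrative):

```python
# Sketch of the meta-gradient idea, not the paper's implementation.
import torch
import torch.nn.functional as F

def gcn_forward(feats, adj, w1, w2):
    """Two-layer linear GCN surrogate: A.relu(A.X.W1).W2 (no normalization)."""
    return adj @ torch.relu(adj @ feats @ w1) @ w2

def meta_gradient(adj, feats, labels, w1, w2, inner_lr=0.1):
    # w1, w2: leaf tensors with requires_grad=True.
    adj_var = adj.clone().requires_grad_(True)
    # Inner step: simulate one training update on the perturbed graph.
    loss_in = F.cross_entropy(gcn_forward(feats, adj_var, w1, w2), labels)
    g1, g2 = torch.autograd.grad(loss_in, (w1, w2), create_graph=True)
    w1n, w2n = w1 - inner_lr * g1, w2 - inner_lr * g2
    # Outer (attacker) loss with the updated weights; its gradient w.r.t.
    # the adjacency matrix is the meta-gradient that guides edge flips.
    loss_out = F.cross_entropy(gcn_forward(feats, adj_var, w1n, w2n), labels)
    return torch.autograd.grad(loss_out, adj_var)[0]
```

The attacker then flips the edges with the largest positive meta-gradient entries, i.e. the structural changes that most increase the loss after training.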
arXiv Detail & Related papers (2019-02-22T09:20:05Z)