Inference Attacks Against Graph Neural Networks
- URL: http://arxiv.org/abs/2110.02631v1
- Date: Wed, 6 Oct 2021 10:08:11 GMT
- Title: Inference Attacks Against Graph Neural Networks
- Authors: Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, and Yang Zhang
- Abstract summary: Graph embedding is a powerful tool to solve the graph analytics problem.
While sharing graph embedding is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of the graph embedding by mounting three inference attacks.
- Score: 33.19531086886817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphs are an important data representation ubiquitously found in the
real world. However, analyzing graph data is computationally difficult due to
its non-Euclidean nature. Graph embedding is a powerful tool to solve the graph
analytics problem by transforming the graph data into low-dimensional vectors.
These vectors can also be shared with third parties to gain additional
insights into the data. While sharing graph embeddings is
intriguing, the associated privacy risks are unexplored. In this paper, we
systematically investigate the information leakage of the graph embedding by
mounting three inference attacks. First, we can successfully infer basic graph
properties, such as the number of nodes, the number of edges, and graph
density, of the target graph with up to 0.89 accuracy. Second, given a subgraph
of interest and the graph embedding, we can determine with high confidence
whether the subgraph is contained in the target graph. For instance, we achieve
0.98 attack AUC on the DD dataset. Third, we propose a novel graph
reconstruction attack that can reconstruct a graph that has similar graph
structural statistics to the target graph. We further propose an effective
defense mechanism based on graph embedding perturbation to mitigate the
inference attacks without noticeable performance degradation for graph
classification tasks. Our code is available at
https://github.com/Zhangzhk0819/GNN-Embedding-Leaks.
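The embedding-perturbation defense mentioned above can be sketched in a few lines (a minimal illustration under our own assumptions, not the authors' implementation; the function names and the choice of Laplace noise are ours):

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via the inverse-CDF method.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_embedding(embedding, scale=0.1, seed=None):
    """Add zero-mean Laplace noise to each dimension of a graph
    embedding before sharing it; larger `scale` means stronger
    mitigation but lower downstream utility (e.g. graph
    classification accuracy)."""
    rng = random.Random(seed)
    return [x + laplace_noise(scale, rng) for x in embedding]

# Example: perturb a toy 8-dimensional embedding before sharing.
shared = perturb_embedding([1.0] * 8, scale=0.1, seed=0)
```

The noise scale is the knob that trades attack mitigation against the utility of the shared embedding.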
Related papers
- Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation [68.59161853439339]
We propose a novel method for generating unlearnable graph examples.
By injecting delusive but imperceptible noise into graphs using our Error-Minimizing Structural Poisoning (EMinS) module, we are able to make the graphs unexploitable.
arXiv Detail & Related papers (2023-03-05T03:30:22Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Finding MNEMON: Reviving Memories of Node Embeddings [39.206574462957136]
We show that an adversary can recover edges with decent accuracy by only gaining access to the node embedding matrix of the original graph.
We demonstrate the effectiveness and applicability of our graph recovery attack through extensive experiments.
arXiv Detail & Related papers (2022-04-14T13:44:26Z)
- Joint 3D Human Shape Recovery from A Single Image with Bilayer-Graph [35.375489948345404]
We propose a dual-scale graph approach to estimate the 3D human shape and pose from images.
We use a coarse graph, derived from a dense graph, to estimate the human's 3D pose, and the dense graph to estimate the 3D shape.
We train our model end-to-end and show that we can achieve state-of-the-art results for several evaluation datasets.
arXiv Detail & Related papers (2021-10-16T05:04:02Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
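The projected gradient idea can be illustrated with a toy step (a hypothetical sketch, not GraphMI's actual code): relax each edge variable to the interval [0, 1], take a gradient step, then project back by clipping:

```python
def projected_gradient_step(adj, grad, lr=0.1):
    """One projected-gradient update on a relaxed adjacency matrix.

    `adj` holds continuous edge values in [0, 1]; after the gradient
    step, clipping projects the result back onto that box, which is
    one simple way to handle the discreteness of graph edges.
    """
    n = len(adj)
    return [
        [min(1.0, max(0.0, adj[i][j] - lr * grad[i][j])) for j in range(n)]
        for i in range(n)
    ]

# Example: one step that pushes one edge off and another on.
step = projected_gradient_step([[0.5, 0.2], [0.2, 0.5]],
                               [[10.0, -10.0], [-10.0, 10.0]])
```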
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Graphfool: Targeted Label Adversarial Attack on Graph Embedding [11.866894644607894]
We propose Graphfool, a novel targeted label adversarial attack on graph embedding.
It can generate adversarial graphs to attack graph embedding methods via classification-boundary and gradient information.
Experiments on real-world graph networks demonstrate that Graphfool achieves better performance than state-of-the-art techniques.
arXiv Detail & Related papers (2021-02-24T13:45:38Z)
- Graph Coarsening with Neural Networks [8.407217618651536]
We propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, we need to carefully choose the Laplace operator on the coarse graph.
Motivated by the observation that the current choice of edge weight for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way.
arXiv Detail & Related papers (2021-02-02T06:50:07Z)
- Graph Information Bottleneck for Subgraph Recognition [103.37499715761784]
We propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning.
Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph.
We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising.
arXiv Detail & Related papers (2020-10-12T09:32:20Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.