Towards Representation Identical Privacy-Preserving Graph Neural Network
via Split Learning
- URL: http://arxiv.org/abs/2107.05917v1
- Date: Tue, 13 Jul 2021 08:35:43 GMT
- Title: Towards Representation Identical Privacy-Preserving Graph Neural Network
via Split Learning
- Authors: Chuanqiang Shan, Huiyun Jiao, Jie Fu
- Abstract summary: We propose SAPGNN for node-level tasks in a horizontally partitioned cross-silo scenario.
It offers a natural extension of centralized GNNs to isolated graphs with max/min pooling aggregation.
To further enhance the data privacy, a secure pooling aggregation mechanism is proposed.
- Score: 18.07536952887173
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, the rapid rise in the number of studies on graph neural
networks (GNNs) has moved the field from theoretical research to the real-world
application stage. Despite the encouraging performance achieved by GNNs, little
attention has been paid to privacy-preserving training and inference over
distributed graph data in the related literature. Due to the particularity of
graph structure, it is challenging to extend existing private learning
frameworks to GNNs. Motivated by the idea of split learning, we propose a
\textbf{S}erver \textbf{A}ided \textbf{P}rivacy-preserving \textbf{GNN}
(SAPGNN) for node-level tasks in a horizontally partitioned cross-silo
scenario. It offers a natural extension of centralized GNNs to isolated graphs
with max/min pooling aggregation, while guaranteeing that all the private data
involved in computation stays at the local data holders. To further enhance
data privacy, a secure pooling aggregation mechanism is proposed. Theoretical
and experimental results show that the proposed model achieves the same
accuracy as one learned over the combined data.
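The "same accuracy as the combined data" claim rests on a simple algebraic fact: element-wise max pooling over per-silo aggregates equals max pooling over the union of neighbors, so a server can combine silo-level results without ever seeing raw features. A minimal sketch of this property (illustrative names only, not the authors' code or their secure protocol):

```python
import numpy as np

def local_aggregate(neighbor_embeddings: np.ndarray) -> np.ndarray:
    """Each data holder max-pools over its own neighbors only."""
    return neighbor_embeddings.max(axis=0)

def server_combine(local_aggregates: list) -> np.ndarray:
    """Server takes the element-wise max over silo-level aggregates."""
    return np.stack(local_aggregates).max(axis=0)

# Two silos hold disjoint neighbor sets of the same target node.
silo_a = np.array([[1.0, 5.0], [2.0, 0.0]])
silo_b = np.array([[4.0, 1.0]])

combined = server_combine([local_aggregate(silo_a), local_aggregate(silo_b)])
centralized = np.vstack([silo_a, silo_b]).max(axis=0)
assert np.array_equal(combined, centralized)  # identical representation
```

In SAPGNN the silo-level aggregates themselves are additionally protected by the secure pooling mechanism; the sketch above only shows why the split computation loses nothing relative to centralized training.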
Related papers
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - ProGAP: Progressive Graph Neural Networks with Differential Privacy
Guarantees [8.79398901328539]
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs.
arXiv Detail & Related papers (2023-04-18T12:08:41Z) - SplitGNN: Splitting GNN for Node Classification with Heterogeneous
Attention [29.307331758493323]
We propose a split learning-based graph neural network (SplitGNN) for graph computation.
Our SplitGNN allows the isolated heterogeneous neighborhood to be collaboratively utilized.
We demonstrate the effectiveness of SplitGNN on node classification tasks over two standard public datasets and a real-world dataset.
arXiv Detail & Related papers (2023-01-27T12:08:44Z) - Semantic Graph Neural Network with Multi-measure Learning for
Semi-supervised Classification [5.000404730573809]
Graph Neural Networks (GNNs) have attracted increasing attention in recent years.
Recent studies have shown that GNNs are vulnerable to the complex underlying structure of the graph.
We propose a novel framework for semi-supervised classification.
arXiv Detail & Related papers (2022-12-04T06:17:11Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised
Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - GAP: Differentially Private Graph Neural Networks with Aggregation
Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representation.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
arXiv Detail & Related papers (2022-03-02T08:58:07Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - FedGraphNN: A Federated Learning System and Benchmark for Graph Neural
Networks [68.64678614325193]
Graph Neural Network (GNN) research is rapidly growing thanks to the capacity of GNNs to learn representations from graph-structured data.
Centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns.
We introduce FedGraphNN, an open research federated learning system and a benchmark to facilitate GNN-based FL research.
arXiv Detail & Related papers (2021-04-14T22:11:35Z) - Contrastive and Generative Graph Convolutional Networks for Graph-based
Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z) - Self-supervised Learning on Graphs: Deep Insights and New Direction [66.78374374440467]
Self-supervised learning (SSL) aims to create domain-specific pretext tasks on unlabeled data.
There is increasing interest in generalizing deep learning to the graph domain in the form of graph neural networks (GNNs).
arXiv Detail & Related papers (2020-06-17T20:30:04Z) - Locally Private Graph Neural Networks [12.473486843211573]
We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
Experiments conducted over real-world datasets demonstrate that our method can maintain a satisfying level of accuracy with low privacy loss.
arXiv Detail & Related papers (2020-06-09T22:36:06Z)
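Several of the differentially private GNNs listed above (GAP, ProGAP, Locally Private GNNs) share one building block: perturbing the neighborhood aggregation rather than the weights. A minimal sketch of that common idea, assuming the standard clip-then-add-Gaussian-noise recipe (not taken from any specific paper above; all names are illustrative):

```python
import numpy as np

def dp_sum_aggregate(neighbor_embeddings, clip_norm=1.0, noise_std=0.5, rng=None):
    """Sum-aggregate neighbor embeddings with per-row L2 clipping
    plus Gaussian noise, so one neighbor's influence is bounded."""
    rng = np.random.default_rng(0) if rng is None else rng
    norms = np.linalg.norm(neighbor_embeddings, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = neighbor_embeddings * scale  # each row now has L2 norm <= clip_norm
    noise = rng.normal(0.0, noise_std, size=neighbor_embeddings.shape[1])
    return clipped.sum(axis=0) + noise

# With noise_std=0 the output is just the clipped sum, which makes the
# sensitivity bound easy to check by hand.
out = dp_sum_aggregate(np.array([[3.0, 4.0], [0.1, 0.0]]), noise_std=0.0)
```

The clipping bounds the aggregate's sensitivity to a single neighbor by `clip_norm`, which is what lets the Gaussian noise scale translate into a formal privacy guarantee; the cited papers differ mainly in where in the architecture this perturbed aggregation sits and how the privacy budget is accounted across layers.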
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.