Node-Level Differentially Private Graph Neural Networks
- URL: http://arxiv.org/abs/2111.15521v1
- Date: Tue, 23 Nov 2021 16:18:53 GMT
- Title: Node-Level Differentially Private Graph Neural Networks
- Authors: Ameya Daigavane, Gagan Madan, Aditya Sinha, Abhradeep Guha Thakurta,
Gaurav Aggarwal, Prateek Jain
- Abstract summary: Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data.
This work formally defines the problem of learning 1-layer GNNs with node-level privacy.
We provide an algorithmic solution with a strong differential privacy guarantee.
- Score: 14.917945355629563
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph Neural Networks (GNNs) are a popular technique for modelling
graph-structured data that compute node-level representations via aggregation
of information from the local neighborhood of each node. However, this
aggregation implies increased risk of revealing sensitive information, as a
node can participate in the inference for multiple nodes. As a result,
standard privacy preserving machine learning techniques, such as differentially
private stochastic gradient descent (DP-SGD) - which are designed for
situations where each data point participates in the inference for one point
only - either do not apply, or lead to inaccurate solutions. In this work, we
formally define the problem of learning 1-layer GNNs with node-level privacy,
and provide an algorithmic solution with a strong differential privacy
guarantee. Even though each node can be involved in the inference for multiple
nodes, by employing a careful sensitivity analysis and a non-trivial extension
of the privacy-by-amplification technique, our method is able to provide
accurate solutions with solid privacy parameters. Empirical evaluation on
standard benchmarks demonstrates that our method is indeed able to learn
accurate privacy preserving GNNs, while still outperforming standard
non-private methods that completely ignore graph information.
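As a rough illustration of the recipe the abstract outlines (bounded neighborhood sampling so that no node influences too many predictions, per-node gradient clipping, and Gaussian noise in the style of DP-SGD), here is a minimal NumPy sketch of one training loop for a 1-layer GNN. It is not the authors' algorithm; the toy data, the degree cap K, the clipping norm C, and the noise multiplier sigma are all illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: n nodes, d-dim features, binary labels, random neighbor lists.
n, d = 100, 8
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
adj = [rng.choice(n, size=5, replace=False) for _ in range(n)]

K = 3        # hypothetical cap on sampled neighbors per node
C = 1.0      # per-node gradient clipping norm
sigma = 2.0  # Gaussian noise multiplier
lr = 0.1
w = np.zeros(d)

def node_gradient(v):
    """Logistic-loss gradient for a 1-layer GNN at node v, using mean
    aggregation over a degree-bounded sample of its neighborhood."""
    nbrs = rng.choice(adj[v], size=min(K, len(adj[v])), replace=False)
    h = X[np.concatenate(([v], nbrs))].mean(axis=0)
    p = 1.0 / (1.0 + np.exp(-h @ w))
    return (p - y[v]) * h

for step in range(200):
    batch = rng.choice(n, size=20, replace=False)
    grads = np.stack([node_gradient(v) for v in batch])
    # Clip each node's gradient so the batch sum has bounded sensitivity,
    # then add Gaussian noise before the update (DP-SGD style).
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, C / np.maximum(norms, 1e-12))
    noisy_sum = grads.sum(axis=0) + rng.normal(scale=sigma * C, size=d)
    w -= lr * noisy_sum / len(batch)
```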
Related papers
- Preserving Node-level Privacy in Graph Neural Networks [8.823710998526705]
We propose a solution that addresses the issue of node-level privacy in Graph Neural Networks (GNNs).
Our protocol consists of two main components: 1) a sampling routine called HeterPoisson, which employs a specialized node sampling strategy and a series of tailored operations to generate a batch of sub-graphs with desired properties, and 2) a randomization routine that utilizes symmetric Laplace noise instead of the commonly used Gaussian noise.
Our protocol enables GNN learning with good performance, as demonstrated by experiments on five real-world datasets.
arXiv Detail & Related papers (2023-11-12T16:21:29Z)
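A minimal sketch of the two ingredients this summary names, under my own assumptions rather than the paper's actual HeterPoisson routine: independent (Poisson-style) node sampling that yields a degree-truncated sub-graph batch, and symmetric Laplace noise applied to a clipped gradient in place of the usual Gaussian noise. The parameters q, max_degree, clip, and scale are placeholders.
```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_node_batch(num_nodes, edges, q=0.1, max_degree=5):
    """Independently include each node with probability q (Poisson sampling),
    then keep only edges between sampled nodes, truncating degrees."""
    sampled = np.flatnonzero(rng.random(num_nodes) < q)
    in_batch = set(sampled.tolist())
    sub_edges, degree = [], {v: 0 for v in in_batch}
    for u, v in edges:
        if u in in_batch and v in in_batch and degree[u] < max_degree and degree[v] < max_degree:
            sub_edges.append((u, v))
            degree[u] += 1
            degree[v] += 1
    return sampled, sub_edges

def privatize(grad, clip=1.0, scale=2.0):
    """Clip the gradient and add symmetric Laplace noise
    (in place of the Gaussian noise used by standard DP-SGD)."""
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip / max(norm, 1e-12))
    return grad + rng.laplace(loc=0.0, scale=scale, size=grad.shape)

# Tiny usage example on a ring graph and a dummy gradient.
edges = [(i, (i + 1) % 50) for i in range(50)]
nodes, batch_edges = poisson_node_batch(50, edges, q=0.2)
noisy = privatize(rng.normal(size=8))
```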
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
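The summary mentions an independent distribution penalty used as a regularization term but does not define it, so the following sketch only shows the general shape such an objective could take: a VGAE-style loss (reconstruction plus KL) plus a regularizer that discourages dependence between two blocks of latent coordinates. The cross-covariance penalty here is a hypothetical stand-in, not the paper's penalty.
```python
import numpy as np

def independence_penalty(z, split):
    """Toy stand-in for an independence regularizer: penalize the
    cross-covariance between two blocks of latent dimensions."""
    a, b = z[:, :split], z[:, split:]
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    cross_cov = a.T @ b / (len(z) - 1)
    return np.sum(cross_cov ** 2)

def regularized_loss(recon_loss, kl_loss, z, split, lam=0.1):
    """VGAE-style objective: reconstruction + KL + independence penalty."""
    return recon_loss + kl_loss + lam * independence_penalty(z, split)

# Example: 32 latent vectors of dimension 16, split into two 8-dim blocks.
rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
loss = regularized_loss(recon_loss=1.0, kl_loss=0.5, z=z, split=8)
```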
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption [6.319864669924721]
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
arXiv Detail & Related papers (2023-08-09T13:18:41Z)
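The exact NAP-GNN mechanism is not spelled out above; the sketch below only illustrates the general idea of importance-grained noise adaption: allocate more of a fixed privacy budget (and therefore less noise) to quantities tied to more important nodes. The degree-based importance score and the per-node budget split are assumptions made for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)

def importance_scores(degrees):
    """Hypothetical importance measure: normalized node degree."""
    d = np.asarray(degrees, dtype=float)
    return d / d.sum()

def adaptive_laplace_noise(values, degrees, total_epsilon=1.0, sensitivity=1.0):
    """Split a total privacy budget across nodes in proportion to importance,
    so more important nodes receive a larger budget and hence smaller noise."""
    eps_per_node = total_epsilon * importance_scores(degrees)
    scales = sensitivity / np.maximum(eps_per_node, 1e-12)
    return values + rng.laplace(scale=scales)

# Example: privatize one scalar statistic per node.
stats = np.ones(5)
noisy_stats = adaptive_laplace_noise(stats, degrees=[1, 2, 3, 4, 10])
```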
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, the obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- DPAR: Decoupled Graph Neural Networks with Node-Level Differential Privacy [30.15971370844865]
We aim to achieve node-level differential privacy (DP) for training GNNs so that a node and its edges are protected.
We propose a Decoupled GNN with Differentially Private Approximate Personalized PageRank (DPAR) for training GNNs with an enhanced privacy-utility tradeoff.
arXiv Detail & Related papers (2022-10-10T05:34:25Z)
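A schematic sketch of the decoupled design described above, with my own simplifications: personalized PageRank is computed by dense power iteration and perturbed with Laplace noise as a crude stand-in for the paper's differentially private approximate PPR, and the aggregated features could then be fed to any ordinary, graph-free classifier (for example one trained with DP-SGD).
```python
import numpy as np

rng = np.random.default_rng(0)

def ppr_matrix(adj, alpha=0.15, iters=20):
    """Personalized PageRank propagation by power iteration on a
    row-normalized adjacency matrix."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1)
    ppr = np.eye(len(adj))
    for _ in range(iters):
        ppr = alpha * np.eye(len(adj)) + (1 - alpha) * ppr @ P
    return ppr

def noisy_ppr_aggregate(adj, X, scale=0.1):
    """Aggregate features with a PPR matrix perturbed by Laplace noise
    (a crude stand-in for a differentially private PPR approximation)."""
    ppr = ppr_matrix(adj)
    noisy_ppr = np.clip(ppr + rng.laplace(scale=scale, size=ppr.shape), 0, None)
    return noisy_ppr @ X

# Example: aggregate random features over a small ring graph; the result
# X_agg can be consumed by any non-graph model in a decoupled pipeline.
n, d = 20, 4
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
X = rng.normal(size=(n, d))
X_agg = noisy_ppr_aggregate(adj, X)
```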
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
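A minimal sketch of aggregation perturbation as described above, assuming the common recipe of normalizing each node embedding, summing over neighbors, and adding Gaussian noise to the sums; the noise level sigma and the toy graph are placeholders, and the full GAP pipeline contains more stages than this.
```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_aggregation(adj, H, sigma=1.0):
    """Sum-aggregate neighbor embeddings with Gaussian noise.

    Rows of H are first scaled to unit norm so the neighbor sum has
    bounded sensitivity with respect to any single node's embedding."""
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    H_unit = H / np.maximum(norms, 1e-12)
    agg = adj @ H_unit
    return agg + rng.normal(scale=sigma, size=agg.shape)

# Example on a random graph with random embeddings.
n, d = 30, 16
adj = (rng.random((n, n)) < 0.1).astype(float)
H = rng.normal(size=(n, d))
H_next = perturbed_aggregation(adj, H)
```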
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
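Following the description above literally, the sketch below scores neighbors with a (here, simple dot-product) attention mechanism, sorts them by score, and applies a 1D convolution over the ordered sequence so that each position receives its own weight. The attention form and kernel size are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

def node2seq_like(x_target, X_nbrs, conv_kernel):
    """Order neighbors by attention score, then apply a 1D convolution
    over the ordered sequence of neighbor feature vectors."""
    scores = X_nbrs @ x_target                       # dot-product attention (illustrative)
    order = np.argsort(-scores)                      # highest-scoring neighbors first
    seq = X_nbrs[order]                              # (num_nbrs, d) ordered sequence
    k = len(conv_kernel)
    out = [conv_kernel @ seq[i:i + k] for i in range(len(seq) - k + 1)]
    return np.array(out)                             # (num_nbrs - k + 1, d)

# Example: 6 neighbors with 4-dim features, kernel of size 3.
x_t = rng.normal(size=4)
X_nbrs = rng.normal(size=(6, 4))
kernel = rng.normal(size=3)
h = node2seq_like(x_t, X_nbrs, kernel)
```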
- Self-supervised Learning on Graphs: Deep Insights and New Direction [66.78374374440467]
Self-supervised learning (SSL) aims to create domain-specific pretext tasks on unlabeled data.
There is increasing interest in generalizing deep learning to the graph domain in the form of graph neural networks (GNNs).
arXiv Detail & Related papers (2020-06-17T20:30:04Z)
- Locally Private Graph Neural Networks [12.473486843211573]
We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
Experiments conducted over real-world datasets demonstrate that our method can maintain a satisfying level of accuracy with low privacy loss.
arXiv Detail & Related papers (2020-06-09T22:36:06Z)
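In the local model each node perturbs its own features before sending them to an untrusted server that trains the GNN. The snippet shows a generic local mechanism, clipping followed by Laplace noise per feature, rather than the specific mechanism of the paper; budget accounting across the feature dimensions is also omitted for brevity.
```python
import numpy as np

rng = np.random.default_rng(0)

def local_feature_perturbation(x, epsilon=1.0, lo=-1.0, hi=1.0):
    """Each node applies this to its own feature vector before release.

    Features are clipped to [lo, hi] so the per-feature sensitivity is
    (hi - lo), then Laplace noise calibrated to epsilon is added."""
    x = np.clip(x, lo, hi)
    scale = (hi - lo) / epsilon
    return x + rng.laplace(scale=scale, size=x.shape)

# Example: a node privatizes its 8-dimensional feature vector locally.
x_private = rng.normal(size=8)
x_released = local_feature_perturbation(x_private, epsilon=2.0)
```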
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
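Schematically, the edge-varying layer described above can be written as follows, where every edge carries its own parameter matrix; fixing those matrices to shared weights scaled by fixed edge coefficients recovers a GCN-style layer, and making them functions of the endpoint features recovers attention-style weighting. The notation is mine, not the paper's.
```latex
% One edge-varying layer: every neighbor j of node i gets its own parameter
% matrix \Phi_{ij}, so different nodes weigh different neighbors differently.
x_i^{(\ell+1)} = \sigma\!\Big( \sum_{j \in \mathcal{N}(i) \cup \{i\}} \Phi_{ij}^{(\ell)} \, x_j^{(\ell)} \Big)
% GCN-like special case:  \Phi_{ij}^{(\ell)} = a_{ij} \, W^{(\ell)}            (fixed edge scalars a_{ij})
% GAT-like special case:  \Phi_{ij}^{(\ell)} = \alpha_{ij}(x_i, x_j) \, W^{(\ell)}  (learned attention weights)
```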
- Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)