Differentially Private Decoupled Graph Convolutions for Multigranular
Topology Protection
- URL: http://arxiv.org/abs/2307.06422v3
- Date: Sat, 14 Oct 2023 19:24:24 GMT
- Title: Differentially Private Decoupled Graph Convolutions for Multigranular
Topology Protection
- Authors: Eli Chien, Wei-Ning Chen, Chao Pan, Pan Li, Ayfer Özgür, Olgica Milenkovic
- Abstract summary: Graph neural networks (GNNs) can inadvertently expose sensitive user information and interactions through their model predictions.
Applying standard DP approaches directly to GNNs is inadvisable for two main reasons.
We propose a new framework termed Graph Differential Privacy (GDP), specifically tailored to graph learning.
- Score: 38.96828804683783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) can inadvertently expose sensitive user information and interactions
through their model predictions. To address these privacy concerns,
Differential Privacy (DP) protocols are employed to control the trade-off
between provable privacy protection and model utility. Applying standard DP approaches directly to GNNs is inadvisable for two main reasons. First,
the prediction of node labels, which relies on neighboring node attributes
through graph convolutions, can lead to privacy leakage. Second, in practical
applications, the privacy requirements for node attributes and graph topology
may differ. In the latter setting, existing DP-GNN models fail to provide
multigranular trade-offs between graph topology privacy, node attribute
privacy, and GNN utility. To address both limitations, we propose a new
framework termed Graph Differential Privacy (GDP), specifically tailored to
graph learning. GDP ensures both provably private model parameters as well as
private predictions. Additionally, we describe a novel unified notion of graph
dataset adjacency to analyze the properties of GDP for different levels of
graph topology privacy. Our findings reveal that DP-GNNs, which rely on graph
convolutions, not only fail to meet the requirements for multigranular graph
topology privacy but also necessitate the injection of DP noise that scales at
least linearly with the maximum node degree. In contrast, our proposed
Differentially Private Decoupled Graph Convolutions (DPDGCs) represent a more
flexible and efficient alternative to graph convolutions that still provides
the necessary guarantees of GDP. To validate our approach, we conducted extensive experiments on seven node-classification benchmark datasets and illustrative synthetic datasets. The results demonstrate that DPDGCs
significantly outperform existing DP-GNNs in terms of privacy-utility
trade-offs.
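As a rough illustration of the decoupling idea (a hypothetical sketch, not the authors' exact DPDGC operator), the snippet below performs a single clipped neighborhood aggregation, privatizes it with Gaussian noise, and leaves all trainable transformations downstream, so the injected noise is calibrated to one bounded per-neighbor contribution rather than growing with a stack of graph convolutions. All names and the clipping scheme are assumptions.

```python
# Hypothetical sketch of a decoupled, privatized aggregation; NOT the
# authors' exact DPDGC construction. Noise is added once to a clipped
# neighbor sum, and all learning happens downstream of this step.
import numpy as np

rng = np.random.default_rng(0)

def noisy_decoupled_aggregation(X, A, sigma, clip=1.0):
    """X: (n, d) node features; A: (n, n) 0/1 adjacency; sigma: noise scale.
    Clipping bounds each neighbor's contribution to the sum by `clip`."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clipped = X / np.maximum(norms / clip, 1.0)   # row norms <= clip
    H = A @ X_clipped                               # one-shot aggregation
    # Gaussian noise calibrated to the clipped per-neighbor contribution;
    # the downstream model never touches raw edges again.
    return H + rng.normal(0.0, sigma * clip, size=H.shape)

n, d = 5, 3
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
H_priv = noisy_decoupled_aggregation(X, A, sigma=1.0)
print(H_priv.shape)  # (5, 3): feed to any non-graph model, e.g. an MLP
```

Because the edges are queried only in this one step, the privatized output can be cached and reused for the rest of training.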
Related papers
- Blink: Link Local Differential Privacy in Graph Neural Networks via
Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and on node degrees, so the server can better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
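A minimal sketch of the mechanism described above, assuming binary adjacency bits: each node randomizes its links with randomized response under a link budget and reports a Laplace-noised degree under a separate budget, and the server combines the two via Bayes' rule to estimate edge posteriors. The function names and the budget split are hypothetical, and the paper's estimator is more refined.

```python
# Sketch of link LDP with a separate degree budget; function names and the
# budget split are hypothetical, and Blink's estimator is more refined.
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(bits, eps):
    """Keep each bit with prob e^eps / (1 + e^eps): eps-LDP per bit."""
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    return np.where(rng.random(bits.shape) < p_keep, bits, 1 - bits)

def report_row(adj_row, eps_link, eps_deg):
    """Client side: privatize one node's adjacency bits and its degree."""
    noisy_bits = randomized_response(adj_row, eps_link)
    noisy_degree = adj_row.sum() + rng.laplace(0.0, 1.0 / eps_deg)
    return noisy_bits, noisy_degree

def posterior_edge_prob(noisy_bits, noisy_degree, eps_link):
    """Server side: P(edge | reported bit), with the noisy degree as prior."""
    n = noisy_bits.size
    prior = np.clip(noisy_degree / n, 1e-6, 1.0 - 1e-6)
    p_keep = np.exp(eps_link) / (1.0 + np.exp(eps_link))
    like_edge = np.where(noisy_bits == 1, p_keep, 1.0 - p_keep)
    like_none = np.where(noisy_bits == 1, 1.0 - p_keep, p_keep)
    return like_edge * prior / (like_edge * prior + like_none * (1.0 - prior))

adj_row = (rng.random(100) < 0.05).astype(int)       # one node's true links
bits, deg = report_row(adj_row, eps_link=2.0, eps_deg=1.0)
print(posterior_edge_prob(bits, deg, eps_link=2.0)[:10].round(2))
```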
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
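One simple way to instantiate an independent-distribution penalty (an assumption on my part; PVGAE's exact regularizer may differ) is a cross-covariance term that pushes two blocks of the learned embedding toward statistical independence:

```python
# Hypothetical instantiation of an independence penalty: the squared
# cross-covariance between two embedding blocks (zero cross-covariance is
# necessary, though not sufficient, for independence).
import numpy as np

def cross_cov_penalty(Z_a, Z_b):
    Za = Z_a - Z_a.mean(axis=0, keepdims=True)
    Zb = Z_b - Z_b.mean(axis=0, keepdims=True)
    C = Za.T @ Zb / (Z_a.shape[0] - 1)      # (d_a, d_b) cross-covariance
    return float(np.sum(C ** 2))

rng = np.random.default_rng(0)
Z = rng.normal(size=(256, 16))              # embeddings from some encoder
penalty = cross_cov_penalty(Z[:, :8], Z[:, 8:])
# Hypothetical training objective:
# loss = reconstruction_loss + kl_loss + lam * penalty
print(round(penalty, 4))
```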
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption [6.319864669924721]
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
arXiv Detail & Related papers (2023-08-09T13:18:41Z)
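A hedged sketch of importance-grained noise adaption, not NAP-GNN's actual allocation rule: per-node Gaussian noise is scaled by a normalized importance score, so nodes needing stronger protection receive more noise. Note that a rigorous privacy analysis must be driven by the worst-case noise scale.

```python
# Hypothetical importance-adaptive noise, not NAP-GNN's actual rule.
import numpy as np

rng = np.random.default_rng(0)

def adaptive_gaussian_noise(X, importance, base_sigma):
    """importance: (n,) in [0, 1]; higher means more protection wanted.
    CAUTION: a rigorous DP analysis must account for the worst-case
    (smallest) noise scale, not the average."""
    scale = base_sigma * (0.5 + importance)          # in [0.5, 1.5] * base
    return X + rng.normal(size=X.shape) * scale[:, None]

X = rng.normal(size=(6, 4))
importance = np.array([0.9, 0.1, 0.5, 1.0, 0.0, 0.3])
print(adaptive_gaussian_noise(X, importance, base_sigma=1.0).shape)
```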
- ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees [8.79398901328539]
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs.
arXiv Detail & Related papers (2023-04-18T12:08:41Z)
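A sketch of one plausible progressive scheme, assuming (as the summary suggests) that training proceeds in stages: each stage computes a single noisy aggregation of the previous stage's frozen embeddings, caches it, and fits a new head, so edges are queried once per stage. This is an assumption about the training layout, not ProGAP's published code.

```python
# Plausible staged scheme (an assumption, not ProGAP's published code):
# each stage queries the edges once, perturbs the aggregate, and caches it.
import numpy as np

rng = np.random.default_rng(0)

def clip_rows(Z, clip=1.0):
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    return Z / np.maximum(norms / clip, 1.0)

def noisy_aggregate(Z, A, sigma):
    return A @ clip_rows(Z) + rng.normal(0.0, sigma, size=Z.shape)

n, d = 8, 4
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.3).astype(float)

cache = [X]
for stage in range(3):                       # three progressive stages
    H = noisy_aggregate(cache[-1], A, sigma=1.0)
    # ...train a small head on np.concatenate(cache + [H], axis=1)...
    cache.append(H)                          # frozen and reused afterwards
print(len(cache), cache[-1].shape)
```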
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges.
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
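A minimal sketch of heterogeneous randomized response over feature and edge bits, with the total budget split between the two; the paper derives better randomization probabilities and tighter error bounds than this uniform baseline, and the budget split shown is an assumption.

```python
# Uniform-baseline sketch: different flip probabilities for feature bits
# and edge bits via a budget split; the paper's probabilities are better.
import numpy as np

rng = np.random.default_rng(0)

def rr(bits, eps):
    """Keep each bit with prob e^eps / (1 + e^eps)."""
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    return np.where(rng.random(bits.shape) < p_keep, bits, 1 - bits)

def privatize(feature_bits, adj_row, eps_total, feature_share=0.5):
    eps_feat = eps_total * feature_share     # budget spent on features
    eps_edge = eps_total - eps_feat          # budget spent on edges
    return rr(feature_bits, eps_feat), rr(adj_row, eps_edge)

feats = (rng.random(10) < 0.5).astype(int)
edges = (rng.random(20) < 0.1).astype(int)
noisy_feats, noisy_edges = privatize(feats, edges, eps_total=4.0)
print(noisy_feats, int(noisy_edges.sum()))
```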
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, only obfuscated features that mix information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and learned with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
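To make the attack surface concrete, here is a toy model-inversion sketch in the spirit of GraphMI (not the paper's algorithm): the adjacency matrix is relaxed to continuous values and optimized by gradient descent so that a known linear model reproduces the observed outputs. The victim model, learning rate, and reported agreement are illustrative only.

```python
# Toy inversion against a known linear "GNN"; illustrative only, and far
# simpler than GraphMI. The recovery is underdetermined on purpose.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, d = 12, 5
X = rng.normal(size=(n, d))                  # public node features
w = rng.normal(size=d)                       # known (white-box) weights
A_true = (rng.random((n, n)) < 0.2).astype(float)
np.fill_diagonal(A_true, 0)
y_obs = sigmoid(A_true @ X @ w)              # victim model's outputs

A_hat = np.full((n, n), 0.5)                 # relaxed adjacency guess
s = X @ w                                    # per-node feature score
for _ in range(500):
    y = sigmoid(A_hat @ s)
    g = 2.0 * (y - y_obs) * y * (1.0 - y) / n   # dLoss/d(pre-activation)
    A_hat -= 1.0 * np.outer(g, s)               # since dh_i/dA_ij = s_j
    A_hat = np.clip(A_hat, 0.0, 1.0)            # project to [0, 1]

recovered = (A_hat > 0.5).astype(float)
print("edge agreement:", round(float((recovered == A_true).mean()), 3))
```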
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
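A hedged sketch of aggregation perturbation as commonly described for GAP: node embeddings are normalized to unit norm, summed over neighbors, and perturbed with Gaussian noise, so a single edge change moves the aggregate by a bounded amount. Details such as caching and multi-hop composition are omitted.

```python
# Sketch of an aggregation-perturbation module: unit-norm rows bound how
# much one edge can move the neighbor sum, calibrating the Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

def perturbed_aggregation(Z, A, sigma):
    Z_unit = Z / np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
    return A @ Z_unit + rng.normal(0.0, sigma, size=Z.shape)

Z = rng.normal(size=(6, 4))
A = (rng.random((6, 6)) < 0.4).astype(float)
print(perturbed_aggregation(Z, A, sigma=1.0).shape)
```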
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
To preserve node proximities, we augment existing GNNs with stochastic node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
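A small sketch of the stochastic-message-passing idea, under the assumption that random node signatures propagated through the graph can encode proximity: linked nodes accumulate correlated signatures, while the randomness keeps the scheme permutation-equivariant in distribution. All specifics here are illustrative.

```python
# Illustrative only: propagate random node signatures; linked nodes tend
# to end up with more similar propagated signatures than unlinked ones.
import numpy as np

rng = np.random.default_rng(0)

n, k = 8, 4
A = (rng.random((n, n)) < 0.35).astype(float)
A = np.maximum(A, A.T)                        # undirected toy graph
np.fill_diagonal(A, 0)
E = rng.normal(size=(n, k))                   # random node signatures

P = E.copy()
for _ in range(2):                            # two propagation rounds
    P = A @ P + P

sim = P @ P.T                                 # proximity scores
off_diag = ~np.eye(n, dtype=bool)
linked = sim[(A == 1) & off_diag].mean()
unlinked = sim[(A == 0) & off_diag].mean()
print(round(float(linked), 2), round(float(unlinked), 2))
```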