GAP: Differentially Private Graph Neural Networks with Aggregation
Perturbation
- URL: http://arxiv.org/abs/2203.00949v1
- Date: Wed, 2 Mar 2022 08:58:07 GMT
- Title: GAP: Differentially Private Graph Neural Networks with Aggregation
Perturbation
- Authors: Sina Sajadmanesh, Ali Shahin Shamsabadi, Aurélien Bellet, Daniel
Gatica-Perez
- Abstract summary: Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards the privacy of nodes and edges.
- Score: 19.247325210343035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are powerful models designed for graph data that
learn node representations by recursively aggregating information from each
node's local neighborhood. However, despite their state-of-the-art performance
in predictive graph-based applications, recent studies have shown that GNNs can
raise significant privacy concerns when graph data contain sensitive
information. As a result, in this paper, we study the problem of learning GNNs
with Differential Privacy (DP). We propose GAP, a novel differentially private
GNN that safeguards the privacy of nodes and edges using aggregation
perturbation, i.e., adding calibrated stochastic noise to the output of the
GNN's aggregation function, which statistically obfuscates the presence of a
single edge (edge-level privacy) or a single node and all its adjacent edges
(node-level privacy). To circumvent the accumulation of privacy cost at every
forward pass of the model, we tailor the GNN architecture to the specifics of
private learning. In particular, we first precompute private aggregations by
recursively applying neighborhood aggregation and perturbing the output of each
aggregation step. Then, we privately train a deep neural network on the
resulting perturbed aggregations for any node-wise classification task. A major
advantage of GAP over previous approaches is that we guarantee edge-level and
node-level DP not only for training, but also at inference time with no
additional costs beyond the training's privacy budget. We theoretically analyze
the formal privacy guarantees of GAP using Rényi DP. Experiments conducted
on three real-world graph datasets demonstrate that GAP achieves a
favorable privacy-accuracy trade-off and significantly outperforms existing
approaches.
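To make aggregation perturbation concrete, here is a minimal sketch of the perturbed recursive aggregation the abstract describes, assuming unit L2 row normalization to bound sensitivity and a Gaussian mechanism; the function name, tensor shapes, and `noise_std` are illustrative assumptions (in practice the noise scale comes from the Rényi DP accounting, which is not shown).

```python
import torch

def private_aggregations(x, adj, num_hops, noise_std):
    """Sketch of aggregation perturbation: recursively apply neighborhood
    aggregation and perturb the output of each aggregation step.

    x: [N, d] node feature matrix; adj: [N, N] adjacency matrix.
    noise_std is assumed to be pre-calibrated to the privacy budget.
    """
    cached = [x]
    h = x
    for _ in range(num_hops):
        # Row-normalize so each node contributes a unit-norm vector to its
        # neighbors' sums, bounding the L2 sensitivity of the aggregation.
        h = torch.nn.functional.normalize(h, p=2, dim=1)
        # Neighborhood aggregation: sum the normalized neighbor features.
        h = adj @ h
        # Perturb the aggregation output with calibrated Gaussian noise.
        h = h + noise_std * torch.randn_like(h)
        cached.append(h)
    # Computed once and cached: training and inference both reuse these
    # perturbed aggregations, so inference adds no extra privacy cost.
    return cached
```

A classifier can then be trained on the cached perturbed aggregations as fixed inputs; for node-level DP, that training step would itself need a private optimizer such as DP-SGD.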
Related papers
- Blink: Link Local Differential Privacy in Graph Neural Networks via
Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph, so that the server can better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
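As a rough illustration of splitting a local privacy budget between links and degrees, here is a hedged client-side sketch using randomized response on adjacency bits and a Laplace-noised degree report; the mechanisms, the even budget split, and the function names are assumptions for exposition, and Blink's actual randomizers and Bayesian server-side denoising are not reproduced here.

```python
import numpy as np

def randomize_links(adj_bits, eps_links, rng):
    # Randomized response over one node's adjacency bits (link LDP):
    # each bit is reported truthfully with probability e^eps / (1 + e^eps).
    keep = rng.random(adj_bits.shape) < np.exp(eps_links) / (1 + np.exp(eps_links))
    return np.where(keep, adj_bits, 1 - adj_bits)

def randomize_degree(degree, eps_degree, rng):
    # Degree has sensitivity 1 under a single edge change, so
    # Laplace(1 / eps) noise gives eps-DP for the degree report.
    return degree + rng.laplace(scale=1.0 / eps_degree)

rng = np.random.default_rng(0)
noisy_row = randomize_links(np.array([0, 1, 0, 1]), eps_links=0.5, rng=rng)
noisy_deg = randomize_degree(2, eps_degree=0.5, rng=rng)
```

Splitting the budget lets the server weigh the heavily noised adjacency bits against the comparatively reliable degree reports when denoising the topology.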
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
However, GNNs can leak sensitive information about the underlying graph; to address this issue, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption [6.319864669924721]
Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
arXiv Detail & Related papers (2023-08-09T13:18:41Z)
- Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection [38.96828804683783]
GNNs can inadvertently expose sensitive user information and interactions through their model predictions.
Applying standard DP approaches directly to GNNs is not advisable for two main reasons.
We propose a new framework termed Graph Differential Privacy (GDP), specifically tailored to graph learning.
arXiv Detail & Related papers (2023-07-12T19:29:06Z)
- ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees [8.79398901328539]
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve the accuracy-privacy trade-off.
arXiv Detail & Related papers (2023-04-18T12:08:41Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that combine information from both representation vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective, calling for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- Node-Level Differentially Private Graph Neural Networks [14.917945355629563]
Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data.
This work formally defines the problem of learning 1-layer GNNs with node-level privacy.
We provide an algorithmic solution with a strong differential privacy guarantee.
arXiv Detail & Related papers (2021-11-23T16:18:53Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Locally Private Graph Neural Networks [12.473486843211573]
We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
Experiments conducted on real-world datasets demonstrate that our method can maintain a satisfactory level of accuracy with low privacy loss.
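For intuition about local perturbation of node data, here is a minimal sketch of the standard one-bit eps-LDP mechanism for features scaled to [0, 1], together with server-side debiasing; this generic randomizer is an assumption for illustration and may differ from the paper's actual mechanism.

```python
import numpy as np

def one_bit_randomizer(x, eps, rng):
    # eps-LDP one-bit mechanism for features in [0, 1]: report 1 with
    # probability (1 + x * (e^eps - 1)) / (e^eps + 1).
    p = (1.0 + x * (np.exp(eps) - 1.0)) / (np.exp(eps) + 1.0)
    return (rng.random(x.shape) < p).astype(np.float64)

def debias(bits, eps):
    # Unbiased server-side estimate: E[debias(bit)] equals the original x.
    return (bits * (np.exp(eps) + 1.0) - 1.0) / (np.exp(eps) - 1.0)

rng = np.random.default_rng(0)
noisy = one_bit_randomizer(np.array([0.2, 0.8, 0.5]), eps=1.0, rng=rng)
estimate = debias(noisy, eps=1.0)
```

Averaging the debiased bits over many nodes (for example, during neighborhood aggregation) drives the estimation error down, which is why such local randomizers can still support reasonably accurate GNN training.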
arXiv Detail & Related papers (2020-06-09T22:36:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.