Differentially Private Graph Classification with GNNs
- URL: http://arxiv.org/abs/2202.02575v2
- Date: Tue, 8 Feb 2022 08:26:07 GMT
- Title: Differentially Private Graph Classification with GNNs
- Authors: Tamara T. Mueller, Johannes C. Paetzold, Chinmay Prabhakar, Dmitrii
Usynin, Daniel Rueckert, and Georgios Kaissis
- Abstract summary: Graph Neural Networks (GNNs) have established themselves as the state-of-the-art models for many machine learning applications.
We introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
We show results on a variety of synthetic and public datasets and evaluate the impact of different GNN architectures.
- Score: 5.830410490229634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have established themselves as the
state-of-the-art models for many machine learning applications such as the
analysis of social networks, protein interactions and molecules. Several among
these datasets contain privacy-sensitive data. Machine learning with
differential privacy is a promising technique to allow deriving insight from
sensitive data while offering formal guarantees of privacy protection. However,
the differentially private training of GNNs has so far remained under-explored
due to the challenges presented by the intrinsic structural connectivity of
graphs. In this work, we introduce differential privacy for graph-level
classification, one of the key applications of machine learning on graphs. Our
method is applicable to deep learning on multi-graph datasets and relies on
differentially private stochastic gradient descent (DP-SGD). We show results on
a variety of synthetic and public datasets and evaluate the impact of different
GNN architectures and training hyperparameters on model performance for
differentially private graph classification. Finally, we apply explainability
techniques to assess whether similar representations are learned in the private
and non-private settings and establish robust baselines for future work in this
area.
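The abstract states that the method relies on differentially private stochastic gradient descent (DP-SGD). In graph-level classification each graph is one training example, so the per-sample clipping of DP-SGD becomes per-graph clipping. The sketch below is a generic illustration of one such update step (clip each per-graph gradient, sum, add Gaussian noise, average); the function name, shapes, and hyperparameters are hypothetical and not taken from the paper's code.

```python
import numpy as np

def dp_sgd_step(per_graph_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update with per-graph gradient clipping.

    per_graph_grads: list of gradient arrays, one per graph in the batch.
    clip_norm: L2 clipping bound C; noise std is noise_multiplier * C.
    Illustrative sketch only, not the paper's implementation.
    """
    clipped = []
    for g in per_graph_grads:
        norm = np.linalg.norm(g)
        # Scale each per-graph gradient so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    grad_sum = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad_sum.shape)
    noisy_mean = (grad_sum + noise) / len(per_graph_grads)
    return params - lr * noisy_mean
```

Because clipping bounds the contribution of any single graph, the Gaussian noise yields a per-step privacy guarantee that can be composed over training with a moments accountant.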
Related papers
- Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach [17.000441871334683]
We propose a learning framework that can provide node privacy at the user level, while incurring low utility loss.
We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy.
We develop reconstruction methods to approximate features and labels from perturbed data.
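The local DP framework summarized above perturbs features and labels on the user side before they leave the device, and then reconstructs approximations server-side. For a single binary attribute, a standard mechanism with this shape is randomized response; the sketch below (a generic textbook mechanism, not the paper's method) shows the perturbation and the unbiased de-biasing step used to approximate the true mean from perturbed data.

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """Report the true bit with prob e^eps/(e^eps+1), else flip it.

    Satisfies epsilon-local differential privacy for a binary attribute.
    """
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

def debias_mean(perturbed_bits, epsilon):
    """Unbiased estimate of the true mean of the original bits.

    E[observed] = mu * (2p - 1) + (1 - p), so invert that affine map.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(perturbed_bits) / len(perturbed_bits)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

The same keep/flip-then-debias pattern extends coordinate-wise to node feature vectors, at the cost of variance that grows as epsilon shrinks.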
arXiv Detail & Related papers (2023-09-15T17:35:51Z) - Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z) - Privacy-Preserving Graph Machine Learning from Data to Computation: A
Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate the attacks to graph properties, the obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - SoK: Differential Privacy on Graph-Structured Data [6.177995200238526]
We study the applications of differential privacy (DP) in the context of graph-structured data.
A lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective.
arXiv Detail & Related papers (2022-03-17T09:56:32Z) - Gromov-Wasserstein Discrepancy with Local Differential Privacy for
Distributed Structural Graphs [7.4398547397969494]
We propose a privacy-preserving framework to analyze the GW discrepancy of node embedding learned locally from graph neural networks.
Our experiments show that, with strong privacy protection guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also provides a noised structural metric under the GW distance.
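An ε-LDP guarantee on locally learned node embeddings, as in the framework above, is typically obtained by adding noise calibrated to the query's sensitivity before the embeddings are shared. The sketch below shows the generic Laplace mechanism applied coordinate-wise to an embedding vector; it is a standard illustration under assumed sensitivity bounds, not the paper's specific algorithm.

```python
import numpy as np

def laplace_perturb_embedding(embedding, epsilon, sensitivity, rng):
    """Perturb an embedding with coordinate-wise Laplace noise.

    sensitivity: assumed L1 bound on how much one user's data can
    change the embedding; scale = sensitivity / epsilon gives eps-LDP
    for that bounded query. Illustrative sketch only.
    """
    scale = sensitivity / epsilon
    return embedding + rng.laplace(0.0, scale, size=embedding.shape)
```

Smaller ε means larger noise scale, so downstream quantities such as a GW discrepancy computed from the perturbed embeddings become noised versions of the true structural metric.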
arXiv Detail & Related papers (2022-02-01T23:32:33Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.