Training Differentially Private Graph Neural Networks with Random Walk
Sampling
- URL: http://arxiv.org/abs/2301.00738v1
- Date: Mon, 2 Jan 2023 16:14:50 GMT
- Title: Training Differentially Private Graph Neural Networks with Random Walk
Sampling
- Authors: Morgane Ayle, Jan Schuchardt, Lukas Gosch, Daniel Zügner, Stephan Günnemann
- Abstract summary: Differentially private stochastic gradient descent is the de facto standard for training neural networks without leaking sensitive information about the training data.
In practice, this limits privacy-preserving deep learning on graphs to very shallow graph neural networks.
We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models are known to put the privacy of their training data at
risk, which poses challenges for their safe and ethical release to the public.
Differentially private stochastic gradient descent is the de facto standard for
training neural networks without leaking sensitive information about the
training data. However, applying it to models for graph-structured data poses a
novel challenge: unlike with i.i.d. data, sensitive information about a node in
a graph can leak not only through its own gradients, but also through the
gradients of all nodes within a larger neighborhood. In practice, this limits
privacy-preserving deep learning on graphs to very shallow graph neural
networks. We propose to solve this issue by training graph neural networks on
disjoint subgraphs of a given training graph. We develop three
random-walk-based methods for generating such disjoint subgraphs and perform a
careful analysis of the data-generating distributions to provide strong privacy
guarantees. Through extensive experiments, we show that our method greatly
outperforms the state-of-the-art baseline on three large graphs, and matches or
outperforms it on four smaller ones.
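To make the core idea concrete, here is a minimal Python sketch of one way to generate disjoint subgraphs with random walks. It is an illustration of the general recipe only, not a reproduction of the paper's three samplers or its privacy analysis: restricting every step of a walk to still-unvisited nodes guarantees that the resulting subgraphs are node-disjoint, so each node contributes to the gradients of at most one subgraph.

```python
import numpy as np

def disjoint_random_walk_subgraphs(adj_list, walk_length, rng=None):
    """Partition a graph's nodes into disjoint subgraphs via random walks.

    Illustrative sketch: start a walk at a random unvisited node, restrict
    every step to still-unvisited neighbors, and emit the walk's node set
    as one subgraph. Since each node joins at most one walk, the subgraphs
    are node-disjoint by construction.
    """
    rng = rng or np.random.default_rng(0)
    unvisited = set(adj_list)
    subgraphs = []
    while unvisited:
        node = int(rng.choice(sorted(unvisited)))
        walk = [node]
        unvisited.discard(node)
        for _ in range(walk_length - 1):
            candidates = [v for v in adj_list[node] if v in unvisited]
            if not candidates:
                break  # walk is stuck; emit what we have so far
            node = candidates[int(rng.integers(len(candidates)))]
            walk.append(node)
            unvisited.discard(node)
        subgraphs.append(walk)
    return subgraphs

# Toy example: a 6-cycle as an adjacency list.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
print(disjoint_random_walk_subgraphs(adj, walk_length=3))
```

Each resulting subgraph could then be treated as a single example in a standard DP-SGD loop (per-example gradient clipping plus calibrated Gaussian noise); per the abstract, the paper's contribution is the careful analysis of the walk-induced data-generating distributions needed to turn this into a formal privacy guarantee.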
Related papers
- Stealing Training Graphs from Graph Neural Networks [54.52392250297907]
Graph Neural Networks (GNNs) have shown promising results in modeling graph data across various tasks.
Because neural networks can memorize their training samples, the model parameters of GNNs carry a high risk of leaking private training data.
We investigate a novel problem of stealing graphs from trained GNNs.
arXiv Detail & Related papers (2024-11-17T23:15:36Z)
- Uncertainty-Aware Robust Learning on Noisy Graphs [22.848589361600382]
We propose a novel uncertainty-aware graph learning framework inspired by distributionally robust optimization.
We use a graph neural network-based encoder to embed the node features and find the optimal node embeddings.
Such an uncertainty-aware learning process leads to improved node representations and a more robust graph predictive model.
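To illustrate the distributionally robust flavor of such an objective (a generic sketch, not this paper's exact framework), one can approximate the worst case over small feature perturbations with a single gradient-ascent step; `model`, `x`, and `y` are assumed placeholders for a node classifier, node features, and labels.

```python
import torch
import torch.nn.functional as F

def dro_node_loss(model, x, y, eps=0.1):
    """Distributionally robust surrogate loss (generic sketch): approximate
    the worst case over feature perturbations in an L2 ball of radius eps
    with one gradient-ascent step, then train on the perturbed features.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(model(x + delta), y)
    (grad,) = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        worst = eps * grad / (grad.norm() + 1e-12)  # ascend toward worst case
    return F.cross_entropy(model(x + worst), y)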
arXiv Detail & Related papers (2023-06-14T02:45:14Z)
- Reconstructing Training Data from Model Gradient, Provably [68.21082086264555]
We reconstruct the training samples from a single gradient query at a randomly chosen parameter value.
As a provable attack that reveals sensitive training data, it demonstrates severe potential threats to privacy.
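The paper's attack is analytic and provable; a more familiar optimization-based variant of the same threat (gradient matching in the style of "deep leakage from gradients") can be sketched as follows, with `model`, `target_grads`, and the input shape all assumed placeholders.

```python
import torch
import torch.nn.functional as F

def reconstruct_from_gradient(model, target_grads, x_shape, y,
                              steps=200, lr=0.1):
    """Gradient-matching reconstruction (a sketch of the classic
    optimization-based variant, not the paper's provable attack):
    optimize a dummy input until the gradient it induces at the model's
    current parameters matches the observed gradient.
    """
    x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()  # gradient of the matching loss w.r.t. x
        opt.step()
    return x.detach()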
arXiv Detail & Related papers (2022-12-07T15:32:22Z)
- Position-Aware Subgraph Neural Networks with Data-Efficient Learning [15.58680146160525]
We propose a Position-Aware Data-Efficient Learning framework for subgraph neural networks called PADEL.
Specifically, we propose a novel node position encoding method that is anchor-free, and design a new generative subgraph augmentation method based on a diffused variational subgraph autoencoder.
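PADEL's encoder is not reproduced here, but the flavor of an anchor-free positional encoding can be illustrated with truncated random-walk diffusion features, which depend only on graph structure and need no anchor nodes (a hypothetical illustration, not PADEL's method).

```python
import numpy as np

def diffusion_position_encoding(adj, k=4, alpha=0.15):
    """Anchor-free positional features via truncated random-walk diffusion
    (illustration only): each node is described by its return probabilities
    under an alpha-damped random walk after 1..k steps.
    """
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1)           # row-stochastic transition matrix
    feats, walk = [], np.eye(adj.shape[0])
    for _ in range(k):
        walk = (1 - alpha) * walk @ P
        feats.append(walk.diagonal())      # return probability per node
    return np.stack(feats, axis=1)         # shape: (num_nodes, k)

A = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
print(diffusion_position_encoding(A, k=3))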
arXiv Detail & Related papers (2022-11-01T16:34:42Z)
- Diving into Unified Data-Model Sparsity for Class-Imbalanced Graph Representation Learning [30.23894624193583]
Training Graph Neural Networks (GNNs) on non-Euclidean graph data often incurs relatively high time costs.
We develop a unified data-model dynamic sparsity framework named Graph Decantation (GraphDec) to address the challenges of training on massive, class-imbalanced graph data.
arXiv Detail & Related papers (2022-10-01T01:47:00Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective, calling for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Scalable Consistency Training for Graph Neural Networks via Self-Ensemble Self-Distillation [13.815063206114713]
We introduce a novel consistency training method to improve the accuracy of graph neural networks (GNNs).
For a target node, we generate different neighborhood expansions and distill the knowledge of their averaged prediction into the GNN.
Our method approximates the expected prediction of the possible neighborhood samples and practically only requires a few samples.
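A minimal sketch of such a consistency loss, assuming a `model` that maps one sampled neighborhood expansion of the target node to class logits (names are placeholders, not the paper's code):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, neighborhood_samples):
    """Self-ensemble consistency loss (sketch): average the predictions
    over several sampled neighborhood expansions of the same target node,
    then pull each individual prediction toward that average.
    """
    probs = [F.softmax(model(s), dim=-1) for s in neighborhood_samples]
    with torch.no_grad():
        teacher = torch.stack(probs).mean(dim=0)  # self-ensembled target
    return sum(F.kl_div(p.log(), teacher, reduction="batchmean")
               for p in probs) / len(probs)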
arXiv Detail & Related papers (2021-10-12T19:24:42Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
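The general recipe behind such a projected gradient module can be sketched as follows: relax binary edges to [0, 1], take a gradient step on the attack loss, and project back onto the feasible set (an illustrative sketch, not GraphMI's implementation).

```python
import torch

def projected_gradient_edge_step(a, grad, lr=0.1):
    """One projected-gradient update on a relaxed adjacency matrix.

    Edges are relaxed from {0, 1} to [0, 1] so the attack loss is
    differentiable; after each gradient step the matrix is projected back
    onto the feasible set by clipping and re-symmetrizing.
    """
    with torch.no_grad():
        a = a - lr * grad          # gradient step on relaxed edges
        a = a.clamp(0.0, 1.0)      # project onto [0, 1]
        a = 0.5 * (a + a.t())      # keep the adjacency symmetric
        a.fill_diagonal_(0.0)      # no self-loops
    return a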
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Scalable Graph Neural Network Training: The Case for Sampling [4.9201378771958675]
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures for learning on graphs.
Training them efficiently is challenging due to the irregular nature of graph data.
Two different approaches have emerged in the literature: whole-graph and sample-based training.
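Sample-based training bounds the per-batch cost by expanding only a few random neighbors per node and layer; here is a generic GraphSAGE-style sketch, not tied to any particular system.

```python
import numpy as np

def sample_neighborhood(adj_list, seed_nodes, fanouts, rng=None):
    """GraphSAGE-style neighbor sampling (generic sketch): for each GNN
    layer keep only `fanout` random neighbors per frontier node, so a
    mini-batch touches a bounded subgraph instead of the whole graph.
    """
    rng = rng or np.random.default_rng(0)
    layers, frontier = [], list(seed_nodes)
    for fanout in fanouts:  # one fanout per GNN layer
        edges = []
        for u in frontier:
            nbrs = adj_list[u]
            picked = rng.choice(nbrs, size=min(fanout, len(nbrs)),
                                replace=False)
            edges.extend((u, int(v)) for v in picked)
        layers.append(edges)
        frontier = sorted({v for _, v in edges})
    return layers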
arXiv Detail & Related papers (2021-05-05T20:44:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.