SoK: Differential Privacy on Graph-Structured Data
- URL: http://arxiv.org/abs/2203.09205v1
- Date: Thu, 17 Mar 2022 09:56:32 GMT
- Title: SoK: Differential Privacy on Graph-Structured Data
- Authors: Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel
Rueckert, and Georgios Kaissis
- Abstract summary: We study the applications of differential privacy (DP) in the context of graph-structured data.
A lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective.
- Score: 6.177995200238526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study the applications of differential privacy (DP) in the
context of graph-structured data. We discuss the formulations of DP applicable
to the publication of graphs and their associated statistics as well as machine
learning on graph-based data, including graph neural networks (GNNs). The
formulation of DP in the context of graph-structured data is difficult, as
individual data points are interconnected (often non-linearly or sparsely).
This connectivity complicates the computation of individual privacy loss in
differentially private learning. The problem is exacerbated by an absence of a
single, well-established formulation of DP in graph settings. This issue
extends to the domain of GNNs, rendering private machine learning on
graph-structured data a challenging task. A lack of prior systematisation work
motivated us to study graph-based learning from a privacy perspective. In this
work, we systematise the different formulations of DP on graphs and discuss challenges
and promising applications, including in the GNN domain. We compare and separate
works into graph analysis tasks and graph learning tasks with GNNs. Finally, we
conclude our work with a discussion of open questions and potential directions
for further research in this area.
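To make the difference between graph DP formulations concrete, the following minimal sketch (illustrative only and not taken from the paper; the adjacency-matrix representation, the degree-capping helper, and all parameter values are assumptions) releases a simple graph statistic, the edge count, with the Laplace mechanism under edge-level DP, where neighbouring graphs differ in one edge, and under node-level DP with a degree bound, where removing one node can remove up to that many edges.

```python
import numpy as np

def laplace_release(value, sensitivity, epsilon, rng=None):
    # Laplace mechanism: add noise with scale = sensitivity / epsilon.
    rng = np.random.default_rng() if rng is None else rng
    return value + rng.laplace(0.0, sensitivity / epsilon)

def edge_count(adj):
    # Undirected simple graph: count each edge once, ignore self-loops.
    return float(np.triu(adj, k=1).sum())

def edge_dp_edge_count(adj, epsilon):
    # Edge-level DP: neighbouring graphs differ in a single edge,
    # so the edge count has global sensitivity 1.
    return laplace_release(edge_count(adj), sensitivity=1.0, epsilon=epsilon)

def cap_degrees(adj, max_degree):
    # Crude degree capping so that removing one node deletes at most
    # `max_degree` edges. NOTE: a production node-DP mechanism needs a capping
    # step whose output changes stably under node removal; this is only an
    # illustration of why node-level sensitivities are larger.
    adj = adj.copy()
    for v in range(adj.shape[0]):
        neighbours = np.flatnonzero(adj[v])
        for u in neighbours[max_degree:]:
            adj[v, u] = adj[u, v] = 0
    return adj

def node_dp_edge_count(adj, epsilon, max_degree):
    # Node-level DP: neighbouring graphs differ in one node and all its edges,
    # so the degree-capped edge count has sensitivity max_degree.
    capped = cap_degrees(adj, max_degree)
    return laplace_release(edge_count(capped), sensitivity=float(max_degree), epsilon=epsilon)

# Toy usage: the node-level release is far noisier at the same epsilon.
rng = np.random.default_rng(0)
A = np.triu((rng.random((50, 50)) < 0.1).astype(int), k=1)
A = A + A.T
print(edge_dp_edge_count(A, epsilon=1.0))
print(node_dp_edge_count(A, epsilon=1.0, max_degree=5))
```

At the same privacy budget, the node-level release is substantially noisier, which illustrates why the choice of neighbouring-graph definition is central to DP on graph-structured data.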
Related papers
- Loss-aware Curriculum Learning for Heterogeneous Graph Neural Networks [30.333265803394998]
This paper investigates the application of curriculum learning techniques to improve the performance of Heterogeneous Graph Neural Networks (HGNNs).
To better assess the quality of the data, we design a loss-aware training schedule, named LTS, that measures the quality of every node in the data.
Our findings demonstrate the efficacy of curriculum learning in enhancing HGNNs' capabilities for analyzing complex graph-structured data.
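As a rough illustration of a loss-aware schedule (a generic sketch under our own assumptions, not the LTS schedule proposed in the paper), one can rank training nodes by their current loss and gradually grow the training set from easy to hard:

```python
import numpy as np

def curriculum_subset(per_node_loss, epoch, total_epochs, start_frac=0.3):
    # Generic loss-aware curriculum: start with the lowest-loss ("easiest")
    # nodes and linearly grow the included fraction to 1.0 over training.
    n = len(per_node_loss)
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, total_epochs - 1))
    k = max(1, int(frac * n))
    return np.argsort(per_node_loss)[:k]   # indices of nodes to train on this epoch

# Toy usage with random per-node losses.
losses = np.random.default_rng(0).random(10)
for epoch in range(3):
    print(epoch, curriculum_subset(losses, epoch, total_epochs=3))
```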
arXiv Detail & Related papers (2024-02-29T05:44:41Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
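The unrolling idea can be sketched in a few lines (a toy illustration under our own assumptions, not the authors' GDN: we assume a first-order convolution model with fixed coefficients, whereas a GDN would learn per-layer parameters end-to-end): truncated proximal-gradient iterations "deconvolve" an observed graph into a sparse, non-negative latent adjacency.

```python
import numpy as np

def prox_sparse_nonneg(x, thr):
    # Proximal step for an l1 penalty plus a non-negativity constraint.
    return np.maximum(x - thr, 0.0)

def unrolled_deconvolution(A_obs, h0, h1, num_layers=30, step=0.1, lam=0.05):
    # Toy unrolled proximal-gradient "deconvolution": assume the observed graph
    # satisfies A_obs = h0*I + h1*A_latent and recover a sparse, non-negative
    # A_latent. In a learned, GDN-style model the per-layer coefficients
    # (h0, h1, step, lam) would be trainable; here they are fixed.
    n = A_obs.shape[0]
    A = np.zeros_like(A_obs)
    I = np.eye(n)
    for _ in range(num_layers):
        residual = A_obs - (h0 * I + h1 * A)   # data-fit residual
        grad = -2.0 * h1 * residual            # gradient of ||residual||_F^2 w.r.t. A
        A = prox_sparse_nonneg(A - step * grad, step * lam)
        np.fill_diagonal(A, 0.0)               # keep the adjacency hollow
        A = 0.5 * (A + A.T)                    # and symmetric
    return A

# Toy usage: recover a planted latent graph from its first-order "convolution".
rng = np.random.default_rng(0)
A_lat = np.triu((rng.random((20, 20)) < 0.2).astype(float), k=1)
A_lat = A_lat + A_lat.T
A_obs = 0.3 * np.eye(20) + 0.8 * A_lat
A_rec = unrolled_deconvolution(A_obs, h0=0.3, h1=0.8)
print(np.abs(A_rec - A_lat).max())             # max deviation from the planted graph
```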
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
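For readers unfamiliar with these constructions, here is a small sketch (illustrative only; the Gaussian kernel width and the choice of k are arbitrary) of how a KNN graph and a fully connected, kernel-weighted graph are built from a point cloud:

```python
import numpy as np

def knn_graph(points, k):
    # Symmetric K-nearest-neighbour graph from a point cloud (n x d array).
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)              # no self-loops
    adj = np.zeros_like(dists)
    nearest = np.argsort(dists, axis=1)[:, :k]   # indices of the k closest points
    rows = np.repeat(np.arange(len(points)), k)
    adj[rows, nearest.ravel()] = 1.0
    return np.maximum(adj, adj.T)                # symmetrise

def fc_graph(points, sigma=1.0):
    # Fully connected graph with Gaussian-kernel edge weights.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adj = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
    np.fill_diagonal(adj, 0.0)
    return adj

# Toy usage on random 3-D coordinates (e.g., atom positions).
pts = np.random.default_rng(0).random((8, 3))
print(knn_graph(pts, k=3).sum(axis=1))           # node degrees (at least 3 after symmetrising)
print(fc_graph(pts).round(2))
```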
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z)
- Differentially Private Graph Classification with GNNs [5.830410490229634]
Graph Neural Networks (GNNs) have established themselves as state-of-the-art models for many machine learning applications.
We introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
We show results on a variety of synthetic and public datasets and evaluate the impact of different GNN architectures.
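For context on how graph-level training is typically privatised, the sketch below shows a generic DP-SGD update (an illustrative sketch, not the authors' method: a mean-pooled linear classifier stands in for a real GNN so the example stays self-contained, and the clipping norm and noise multiplier are arbitrary): each graph contributes one gradient, clipped to a fixed norm, and calibrated Gaussian noise is added before the averaged update.

```python
import torch

class PooledClassifier(torch.nn.Module):
    # Toy stand-in for a graph-level classifier: mean-pool node features,
    # then apply a linear head. A real pipeline would use a GNN.
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.head = torch.nn.Linear(in_dim, num_classes)

    def forward(self, node_features):            # (num_nodes, in_dim) -> (num_classes,)
        return self.head(node_features.mean(dim=0))

def dp_sgd_step(model, graphs, labels, lr=0.1, clip=1.0, noise_mult=1.0):
    # One DP-SGD step: clip each per-graph gradient to `clip`, sum them,
    # add Gaussian noise with std = noise_mult * clip, then average and update.
    loss_fn = torch.nn.CrossEntropyLoss()
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(graphs, labels):             # per-graph ("per-example") gradients
        loss = loss_fn(model(x).unsqueeze(0), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(p) * noise_mult * clip
            p -= lr * (s + noise) / len(graphs)

# Toy usage: four "graphs" given as node-feature matrices with binary labels.
torch.manual_seed(0)
model = PooledClassifier(in_dim=16, num_classes=2)
graphs = [torch.randn(torch.randint(5, 12, (1,)).item(), 16) for _ in range(4)]
labels = [torch.tensor(i % 2) for i in range(4)]
dp_sgd_step(model, graphs, labels)
```

Tracking the cumulative privacy loss across many such steps (for example with a Rényi-DP accountant) is a separate component not shown here.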
arXiv Detail & Related papers (2022-02-05T15:16:40Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
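To give a feel for how such an inversion attack operates, here is a minimal projected-gradient sketch (illustrative only and under our own assumptions: a one-layer surrogate GCN with random weights replaces the real target model, and a simple l1 prior stands in for GraphMI's smoothness terms and auto-encoder module): the attacker relaxes the unknown adjacency to continuous values, fits the model's predictions on known node features and labels, and projects back onto a valid adjacency after every step.

```python
import torch

def simple_gcn(adj, feats, weight):
    # One-layer GCN-style surrogate for the target model: row-normalised
    # adjacency (with self-loops) times node features times a weight matrix.
    adj_hat = adj + torch.eye(adj.shape[0])
    deg = adj_hat.sum(dim=1, keepdim=True)
    return (adj_hat / deg) @ feats @ weight       # per-node logits

def invert_graph(feats, labels, weight, steps=200, lr=0.1, lam=0.01):
    # Projected-gradient edge inference: relax the unknown adjacency to values
    # in [0, 1], fit the surrogate model to the known labels with a sparsity
    # prior, and project onto a symmetric, hollow, box-constrained adjacency.
    n = feats.shape[0]
    adj = torch.full((n, n), 0.5, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.SGD([adj], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(simple_gcn(adj, feats, weight), labels) + lam * adj.abs().sum()
        loss.backward()
        opt.step()
        with torch.no_grad():                     # projection step
            adj.copy_(((adj + adj.T) / 2).clamp(0.0, 1.0))
            adj.fill_diagonal_(0.0)
    return adj.detach()

# Toy usage: random node features, labels, and "target model" weights.
torch.manual_seed(0)
X, W, y = torch.randn(12, 8), torch.randn(8, 3), torch.randint(0, 3, (12,))
recovered = invert_graph(X, y, W)
print((recovered > 0.5).float().sum())            # adjacency entries inferred as edges
```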
arXiv Detail & Related papers (2021-06-05T07:07:52Z)