Gromov-Wasserstein Discrepancy with Local Differential Privacy for
Distributed Structural Graphs
- URL: http://arxiv.org/abs/2202.00808v1
- Date: Tue, 1 Feb 2022 23:32:33 GMT
- Title: Gromov-Wasserstein Discrepancy with Local Differential Privacy for
Distributed Structural Graphs
- Authors: Hongwei Jin, Xun Chen
- Abstract summary: We propose a privacy-preserving framework to analyze the GW discrepancy of node embedding learned locally from graph neural networks.
Our experiments show that, with strong privacy protection guaranteed by the $\varepsilon$-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also presents a noised structural metric under the GW distance.
- Score: 7.4398547397969494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning the similarity between structured data, especially graphs, is
one of the essential problems. Besides approaches like graph kernels, the
Gromov-Wasserstein (GW) distance has recently drawn significant attention due to its
flexibility in capturing both topological and feature characteristics, as well as
its permutation invariance. However, structured data are widely
distributed across different data mining and machine learning applications, and
privacy concerns limit access to the decentralized data to either
individual clients or separate silos. To tackle these issues, we propose a
privacy-preserving framework that analyzes the GW discrepancy of node embeddings
learned locally from graph neural networks in a federated flavor, and then
explicitly applies local differential privacy (LDP) based on the Multi-bit Encoder to
protect sensitive information. Our experiments show that, with the strong privacy
protection guaranteed by the $\varepsilon$-LDP algorithm, the proposed
framework not only preserves privacy in graph learning but also presents a
noised structural metric under the GW distance, resulting in comparable and even
better performance in classification and clustering tasks. Moreover, we reason
about the rationale behind the LDP-based GW distance analytically and empirically.
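Below is a minimal sketch of the two ingredients described above, under stated assumptions: a multi-bit-style LDP encoder applied locally to node embeddings (the constants `eps`, `m`, and the clipping range are illustrative, and the encoder follows the general multi-bit mechanism rather than the paper's exact implementation), followed by the GW discrepancy between the perturbed embeddings computed with the POT library's documented API.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)


def multibit_encode(x, eps, x_min=-1.0, x_max=1.0, m=1, rng=None):
    """Multi-bit-style LDP encoder (sketch): sample m dimensions, apply
    randomized response per dimension, then rectify to an unbiased estimate."""
    rng = np.random.default_rng(rng)
    x = np.clip(x, x_min, x_max)
    d = x.shape[0]
    t = np.zeros(d)
    dims = rng.choice(d, size=m, replace=False)       # dimensions to report
    a = np.exp(eps / m)
    p = 1.0 / (a + 1.0) + (x[dims] - x_min) / (x_max - x_min) * (a - 1.0) / (a + 1.0)
    t[dims] = np.where(rng.random(m) < p, 1.0, -1.0)  # randomized response
    scale = d * (x_max - x_min) / (2.0 * m) * (a + 1.0) / (a - 1.0)
    return scale * t + (x_max + x_min) / 2.0          # unbiased rectification


def private_gw(Z1, Z2, eps=1.0):
    """GW discrepancy between two graphs' locally perturbed node embeddings."""
    Z1p = np.stack([multibit_encode(z, eps) for z in Z1])
    Z2p = np.stack([multibit_encode(z, eps) for z in Z2])
    C1 = ot.dist(Z1p, Z1p)                 # intra-graph structure matrices
    C2 = ot.dist(Z2p, Z2p)
    p, q = ot.unif(len(Z1p)), ot.unif(len(Z2p))
    return ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')
```

Because the rectified encoder is unbiased, the perturbed distance matrices remain informative in expectation, which is consistent with the abstract's claim of a "noised structural metric" under the GW distance.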
Related papers
- Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
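As a worked illustration of how such a bound can arise, here is a hedged sketch (not the paper's exact statement) for noisy gradient descent $\theta_{t+1} = \theta_t - \eta\, g_t + \mathcal{N}(0, \sigma^2 I)$ on neighboring datasets $D, D'$: the chain rule for KL divergence composes per-step Gaussian terms, and $\|a-b\|^2 \le 2\|a\|^2 + 2\|b\|^2$ turns the gradient difference into expected squared gradient norms.

```latex
\mathrm{KL}\big(\theta_{1:T}(D) \,\|\, \theta_{1:T}(D')\big)
  \le \sum_{t=1}^{T} \frac{\eta^2}{2\sigma^2}\, \mathbb{E}\,\|g_t(D) - g_t(D')\|^2
  \le \frac{\eta^2}{\sigma^2} \sum_{t=1}^{T}
      \Big( \mathbb{E}\,\|g_t(D)\|^2 + \mathbb{E}\,\|g_t(D')\|^2 \Big)
```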
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach [17.000441871334683]
We propose a learning framework that can provide node privacy at the user level, while incurring low utility loss.
We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy.
We develop reconstruction methods to approximate features and labels from perturbed data.
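A minimal sketch of the reconstruction idea, assuming the LDP perturbation is unbiased: multi-hop mean aggregation over graph neighborhoods concentrates around the true local feature means, so averaging acts as a denoiser. The function name and dense-adjacency representation are hypothetical simplifications.

```python
import numpy as np

def neighborhood_denoise(X_perturbed, A, hops=2):
    """Approximate clean node features by multi-hop mean aggregation over
    the graph; unbiased LDP noise averages out across neighbors."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    P = A / deg                       # row-stochastic propagation matrix
    X = X_perturbed
    for _ in range(hops):
        X = P @ X                     # one hop of neighborhood averaging
    return X
```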
arXiv Detail & Related papers (2023-09-15T17:35:51Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
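One plausible way to realize such a penalty (an illustrative stand-in, not necessarily the paper's exact regularizer) is to penalize off-diagonal covariance between latent dimensions of the variational encoder:

```python
import torch

def independence_penalty(z):
    """Encourage independent latent dimensions by penalizing the
    off-diagonal entries of the batch covariance of codes z."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = z.T @ z / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

# Hypothetical training objective:
# loss = recon_loss + kl_loss + lam * independence_penalty(z)
```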
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs).
We propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks based on a differential privacy mechanism.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [19.247325210343035]
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
We propose GAP, a novel differentially private GNN that safeguards privacy of nodes and edges.
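A single-hop sketch of aggregation perturbation, under stated assumptions: row-normalizing node embeddings bounds each edge's contribution to the neighbor sum, so calibrated Gaussian noise on the aggregate yields edge-level privacy. The actual GAP method uses multi-hop, cached noisy aggregations; this only illustrates the core step.

```python
import numpy as np

def noisy_aggregate(X, A, sigma, rng=None):
    """One hop of GAP-style aggregation perturbation (sketch)."""
    rng = np.random.default_rng(rng)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True).clip(min=1e-12)
    agg = A @ Xn                   # sum of (unit-norm) neighbor embeddings
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```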
arXiv Detail & Related papers (2022-03-02T08:58:07Z)
- Differentially Private Graph Classification with GNNs [5.830410490229634]
Graph Neural Networks (GNNs) have established themselves as the state-of-the-art models for many machine learning applications.
We introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
We show results on a variety of synthetic and public datasets and evaluate the impact of different GNN architectures.
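In graph-level classification each graph is one training example, so a DP-SGD-style step applies directly; the following numpy sketch (names and constants are illustrative, not the paper's code) clips each graph's gradient to bound sensitivity and adds Gaussian noise to the sum.

```python
import numpy as np

def dp_sgd_grad(per_graph_grads, clip_norm, noise_mult, rng=None):
    """Clip each per-graph gradient, sum, then add calibrated noise."""
    rng = np.random.default_rng(rng)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_graph_grads]
    total = np.sum(clipped, axis=0)
    total += rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return total / len(per_graph_grads)
```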
arXiv Detail & Related papers (2022-02-05T15:16:40Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows the inference of private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
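A minimal sketch of one such nullspace condition, assuming the protected quantity is the network-wide average: draw i.i.d. perturbations and subtract their mean, so the noise lies in the nullspace of averaging and cancels exactly when the agents' estimates are combined.

```python
import numpy as np

def zero_sum_perturbations(n_agents, dim, scale, rng=None):
    """Correlated perturbations that sum to zero across agents (sketch)."""
    rng = np.random.default_rng(rng)
    E = rng.normal(0.0, scale, size=(n_agents, dim))
    return E - E.mean(axis=0, keepdims=True)   # columns now sum to zero
```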
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
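A hedged sketch of the adversarial-filtering idea using a plain cross-entropy adversary (the paper's objectives use total variation and the Wasserstein distance; swapping those in changes only the losses). All module names and sizes here are hypothetical.

```python
import torch
import torch.nn as nn

filt = nn.Linear(64, 64)   # local filter over node embeddings (hypothetical)
adv = nn.Linear(64, 2)     # adversary predicting a binary sensitive attribute
opt_f = torch.optim.Adam(filt.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(z, s):
    """z: node embeddings (B, 64); s: sensitive labels (B,)."""
    # Adversary step: learn to recover the attribute from filtered embeddings.
    opt_a.zero_grad()
    ce(adv(filt(z).detach()), s).backward()
    opt_a.step()
    # Filter step: make the attribute unpredictable (maximize adversary loss).
    opt_f.zero_grad()
    (-ce(adv(filt(z)), s)).backward()
    opt_f.step()
```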
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Locally Private Graph Neural Networks [12.473486843211573]
We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
Experiments conducted over real-world datasets demonstrate that our method can maintain a satisfying level of accuracy with low privacy loss.
arXiv Detail & Related papers (2020-06-09T22:36:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.