Independent Distribution Regularization for Private Graph Embedding
- URL: http://arxiv.org/abs/2308.08360v1
- Date: Wed, 16 Aug 2023 13:32:43 GMT
- Title: Independent Distribution Regularization for Private Graph Embedding
- Authors: Qi Hu, Yangqiu Song
- Abstract summary: Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of an independent distribution penalty as a regularization term.
- Score: 55.24441467292359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning graph embeddings is a crucial task in graph mining. An
effective graph embedding model can learn low-dimensional representations from
graph-structured data for data publishing, benefiting various downstream
applications such as node classification and link prediction. However, recent
studies have revealed that graph embeddings are susceptible to attribute
inference attacks, which allow attackers to infer private node attributes from
the learned graph embeddings. To address these concerns, privacy-preserving
graph embedding methods have emerged, aiming to address the primary learning
task and privacy protection simultaneously through adversarial learning. However, most
existing methods assume that representation models have access to all sensitive
attributes in advance during the training stage, which is not always the case
due to diverse privacy preferences. Furthermore, the commonly used adversarial
learning technique in privacy-preserving representation learning suffers from
unstable training issues. In this paper, we propose a novel approach called
Private Variational Graph AutoEncoders (PVGAE), which uses an independent
distribution penalty as a regularization term. Specifically, we split the
original variational graph autoencoder (VGAE) to learn sensitive and
non-sensitive latent representations using two sets of encoders. Additionally,
we introduce a novel regularization to enforce the independence of the
encoders. We prove the theoretical effectiveness of the regularization from the
perspective of mutual information. Experimental results on three real-world
datasets demonstrate that PVGAE outperforms other baselines in private
embedding learning regarding utility performance and privacy protection.
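As a concrete illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch, assuming a dense normalized adjacency, a shared GCN layer with two variational heads, an inner-product edge decoder, and a cross-covariance penalty standing in for the paper's independent distribution regularizer (the paper's exact penalty and its mutual-information analysis are not reproduced here); all names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One dense GCN propagation: adj_norm @ (x W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency with self-loops
        return adj_norm @ self.lin(x)


class PVGAESketch(nn.Module):
    """Two variational encoder heads on a shared GCN layer: one for
    non-sensitive factors (published downstream), one for sensitive factors."""
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.shared = GCNLayer(in_dim, hid_dim)
        self.mu_ns = GCNLayer(hid_dim, z_dim)
        self.logvar_ns = GCNLayer(hid_dim, z_dim)
        self.mu_s = GCNLayer(hid_dim, z_dim)
        self.logvar_s = GCNLayer(hid_dim, z_dim)

    def encode(self, x, adj_norm):
        h = F.relu(self.shared(x, adj_norm))
        def head(mu_l, lv_l):
            mu, logvar = mu_l(h, adj_norm), lv_l(h, adj_norm)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            return z, mu, logvar
        return head(self.mu_ns, self.logvar_ns), head(self.mu_s, self.logvar_s)


def cross_cov_penalty(z_ns, z_s):
    """Hypothetical independence surrogate: squared cross-covariance
    between the two latent blocks (zero when they are uncorrelated)."""
    z_ns = z_ns - z_ns.mean(dim=0)
    z_s = z_s - z_s.mean(dim=0)
    c = z_ns.t() @ z_s / (z_ns.size(0) - 1)
    return (c ** 2).sum()


def loss_fn(model, x, adj_norm, adj_label, lam=1.0):
    (z_ns, mu1, lv1), (z_s, mu2, lv2) = model.encode(x, adj_norm)
    z = torch.cat([z_ns, z_s], dim=1)
    # Inner-product decoder reconstructs edges from the full latent code.
    rec = F.binary_cross_entropy_with_logits(z @ z.t(), adj_label)
    kl = sum(-0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
             for mu, lv in ((mu1, lv1), (mu2, lv2)))
    return rec + kl + lam * cross_cov_penalty(z_ns, z_s)
```

Note that a cross-covariance penalty only suppresses linear dependence between the two latent blocks; the paper's regularizer targets full distributional independence, which the mutual-information analysis formalizes.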
Related papers
- Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data and utilize personalized PageRank as the proximity measure to learn node representations.
Experiments on several real-world graph datasets demonstrate that PrivGE achieves an optimal balance between privacy and utility.
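This summary does not spell out PrivGE's LDP mechanism, so the sketch below shows a standard local mechanism, randomized response on binary node attributes, purely as an illustration of how node data can be obfuscated before embedding; the function and per-bit treatment are assumptions, not PrivGE's actual design.

```python
import math
import torch

def randomized_response(bits: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Flip each binary attribute with probability 1 / (1 + e^eps).

    Keeping a bit with probability e^eps / (1 + e^eps) satisfies
    epsilon-LDP per bit; the server never sees the raw attributes.
    """
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    keep = torch.rand(bits.shape) < p_keep
    return torch.where(keep, bits, 1 - bits)
```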
arXiv Detail & Related papers (2023-10-17T08:06:08Z)
- Free Lunch for Privacy Preserving Distributed Graph Learning [1.8292714902548342]
We present a novel privacy-respecting framework for distributed graph learning and graph-based machine learning.
This framework aims to learn features as well as distances without requiring actual features while preserving the original structural properties of the raw data.
arXiv Detail & Related papers (2023-05-18T10:41:21Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that combine information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Differentially Private Graph Classification with GNNs [5.830410490229634]
Graph Neural Networks (GNNs) have established themselves as state-of-the-art models for many machine learning applications.
We introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
We show results on a variety of synthetic and public datasets and evaluate the impact of different GNN architectures.
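The mechanism is not described in this summary; one common route to differentially private graph-level classification is DP-SGD-style training with per-graph gradient clipping and Gaussian noise. The sketch below is that generic recipe, not necessarily the paper's method, and all names and hyperparameters are illustrative.

```python
import torch

def dp_sgd_step(model, loss_per_graph, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD update: clip each per-graph gradient, sum, add noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for loss in loss_per_graph:  # one scalar loss per graph in the batch
        grads = torch.autograd.grad(loss, params, retain_graph=True)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip / (norm + 1e-12)).clamp(max=1.0)  # per-sample clipping
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    n = len(loss_per_graph)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * sigma * clip  # Gaussian mechanism
            p.add_(-(lr / n) * (s + noise))
```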
arXiv Detail & Related papers (2022-02-05T15:16:40Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
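The projected gradient idea, optimizing a continuous relaxation of the adjacency matrix and projecting back to a feasible set before thresholding, can be sketched as follows; the attack loss, sparsity weight, and threshold are placeholder assumptions rather than GraphMI's exact procedure.

```python
import torch

def pgd_adjacency(model, x, a, attack_loss, steps=100, lr=0.01, lam=1e-3):
    """Projected gradient descent over a relaxed adjacency in [0, 1]."""
    a = a.clone().requires_grad_(True)
    for _ in range(steps):
        # Sparsity term keeps the recovered graph from densifying.
        loss = attack_loss(model, x, a) + lam * a.abs().sum()
        grad, = torch.autograd.grad(loss, a)
        with torch.no_grad():
            a -= lr * grad
            a.clamp_(0.0, 1.0)  # project back onto the feasible box
    return (a > 0.5).float()    # discretize edges by thresholding
```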
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
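The min-max structure of such adversarial filtering can be sketched as below; note this substitutes a simple cross-entropy adversary for the paper's total variation and Wasserstein objectives, and all module and optimizer names are hypothetical.

```python
import torch

def obfuscation_step(filter_net, task_head, adversary, x, y_task, y_sens,
                     opt_main, opt_adv, beta=1.0):
    """One alternation of adversarial filtering (cross-entropy surrogate)."""
    ce = torch.nn.functional.cross_entropy
    # 1) Adversary tries to recover the sensitive attribute from embeddings.
    z = filter_net(x).detach()  # detach: do not update the filter here
    opt_adv.zero_grad()
    ce(adversary(z), y_sens).backward()
    opt_adv.step()
    # 2) Filter and task head: keep utility, hurt the (frozen) adversary.
    z = filter_net(x)
    opt_main.zero_grad()
    main_loss = ce(task_head(z), y_task) - beta * ce(adversary(z), y_sens)
    main_loss.backward()
    opt_main.step()
```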
arXiv Detail & Related papers (2020-09-28T17:55:04Z)