Adversarial Privacy Preserving Graph Embedding against Inference Attack
- URL: http://arxiv.org/abs/2008.13072v1
- Date: Sun, 30 Aug 2020 00:06:49 GMT
- Title: Adversarial Privacy Preserving Graph Embedding against Inference Attack
- Authors: Kaiyang Li, Guangchun Luo, Yang Ye, Wei Li, Shihao Ji, Zhipeng Cai
- Abstract summary: Graph embedding has proved extremely useful for learning low-dimensional feature representations from graph-structured data.
Existing graph embedding methods do not consider users' privacy to prevent inference attacks.
We propose Adversarial Privacy Graph Embedding (APGE), a graph adversarial training framework that integrates disentangling and purging mechanisms to remove users' private information from learned node representations.
- Score: 9.90348608491218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the surge in popularity of Internet of Things (IoT), mobile
devices, social media, etc. has opened up a large source for graph data. Graph
embedding has proved extremely useful for learning low-dimensional feature
representations from graph-structured data. These feature representations can
be used for a variety of prediction tasks from node classification to link
prediction. However, existing graph embedding methods do not consider users'
privacy to prevent inference attacks. That is, adversaries can infer users'
sensitive information by analyzing node representations learned from graph
embedding algorithms. In this paper, we propose Adversarial Privacy Graph
Embedding (APGE), a graph adversarial training framework that integrates the
disentangling and purging mechanisms to remove users' private information from
learned node representations. The proposed method preserves the structural
information and utility attributes of a graph while concealing users' private
attributes from inference attacks. Extensive experiments on real-world graph
datasets demonstrate the superior performance of APGE compared to the state
of the art. Our source code can be found at
https://github.com/uJ62JHD/Privacy-Preserving-Social-Network-Embedding.
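The purging mechanism can be illustrated with a minimal NumPy sketch, assuming a linear encoder and a logistic attacker head (both hypothetical simplifications, not the authors' implementation; the actual framework also preserves structural information and utility attributes, which this toy omits). The attacker tries to predict a private binary attribute from the embeddings, and the encoder is updated with the attacker's gradient reversed so the embeddings become less informative about that attribute:

```python
import numpy as np

# Toy sketch of adversarial purging (illustrative, not the authors' code):
# a linear encoder maps node features to embeddings, a logistic "attacker"
# head tries to recover a private binary attribute from them, and the
# encoder is updated with the attacker's gradient reversed. Utility and
# structure losses from the real framework are omitted for brevity.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

n, d_in, d_emb = 200, 8, 4
X = rng.normal(size=(n, d_in))
private = (X[:, 0] > 0).astype(float)   # private attribute leaks via feature 0

W_enc = rng.normal(scale=0.1, size=(d_in, d_emb))  # encoder weights
w_adv = np.zeros(d_emb)                            # attacker head

lr = 0.05
for _ in range(150):
    Z = X @ W_enc                        # node embeddings
    err = sigmoid(Z @ w_adv) - private   # d(BCE)/d(logit) for the attacker

    # attacker step: descend its own loss (fit the private attribute)
    w_adv -= lr * (Z.T @ err) / n

    # encoder step: ascend the attacker's loss (gradient reversal = purging)
    grad_Z = np.outer(err, w_adv) / n
    W_enc += lr * (X.T @ grad_Z)

acc = float(np.mean((sigmoid(X @ W_enc @ w_adv) > 0.5) == (private > 0.5)))
print("attacker accuracy after purging:", round(acc, 2))
```

A full method would train this jointly with reconstruction and utility objectives; the sketch keeps only the privacy term so the gradient-reversal mechanics stay visible.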
Related papers
- GraphPub: Generation of Differential Privacy Graph with High Availability [21.829551460549936]
Differential privacy (DP) is a common method to protect privacy on graph data.
Due to the complex topological structure of graph data, applying DP on graphs often affects the message passing and aggregation of GNN models.
We propose graph publisher (GraphPub), which protects graph topology while keeping data availability essentially unchanged.
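GraphPub's own mechanism is not spelled out in this abstract. As a point of reference only, the classic randomized-response baseline for edge-level differential privacy (a standard technique, not GraphPub itself) can be sketched as follows:

```python
import math
import numpy as np

# Illustrative baseline, not GraphPub: randomized response for edge-level
# differential privacy. Each potential edge is flipped independently with
# probability 1 / (1 + e^eps), which satisfies eps-DP per edge but can
# damage utility at small eps, the trade-off GraphPub aims to improve.

def randomized_response(adj, eps, rng):
    """Return a noisy adjacency matrix via independent edge flips."""
    p_flip = 1.0 / (1.0 + math.exp(eps))
    flips = rng.random(adj.shape) < p_flip
    noisy = np.where(flips, 1 - adj, adj)
    noisy = np.triu(noisy, 1)           # keep the graph simple and undirected
    return noisy + noisy.T

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(int)
A = np.triu(A, 1)
A = A + A.T
A_priv = randomized_response(A, eps=2.0, rng=rng)
print(A_priv)
```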
arXiv Detail & Related papers (2024-02-28T20:02:55Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Inference Attacks Against Graph Neural Networks [33.19531086886817]
Graph embedding is a powerful tool for solving graph analytics problems.
While sharing graph embedding is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of the graph embedding by mounting three inference attacks.
arXiv Detail & Related papers (2021-10-06T10:08:11Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
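The projected-gradient idea can be illustrated with a toy NumPy sketch. The gradient below is a stand-in (the real attack differentiates a loss through the target GNN), but the projection step, which symmetrizes the matrix, clips entries to [0, 1], and zeroes the diagonal, is the part being shown:

```python
import numpy as np

# Toy sketch of projected gradient descent on a relaxed adjacency matrix
# (illustrative; GraphMI's actual loss differentiates through the target
# GNN, which is stood in for here by a simple quadratic). The discrete
# adjacency matrix is relaxed to [0, 1]^(n x n), updated by gradient
# steps, and projected back onto the feasible set after every step.

rng = np.random.default_rng(1)
n = 6
A_true = (rng.random((n, n)) < 0.3).astype(float)
A_true = np.triu(A_true, 1)
A_true = A_true + A_true.T              # the (unknown) training graph

A = np.full((n, n), 0.5)                # relaxed adjacency guess
np.fill_diagonal(A, 0.0)

lr = 0.5
for _ in range(100):
    grad = 2.0 * (A - A_true)           # stand-in for d(loss)/dA
    A = A - lr * grad
    # projection: symmetric, entries in [0, 1], no self-loops
    A = np.clip((A + A.T) / 2.0, 0.0, 1.0)
    np.fill_diagonal(A, 0.0)

A_hat = (A > 0.5).astype(float)         # threshold for final edge inference
print("recovered adjacency matches:", np.array_equal(A_hat, A_true))
```

The clip-and-symmetrize projection is the piece that handles the discreteness of edges while keeping the relaxation feasible; GraphMI's auto-encoder module for exploiting topology, attributes, and model parameters is beyond this sketch.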
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.