Privacy-Preserved Neural Graph Similarity Learning
- URL: http://arxiv.org/abs/2210.11730v1
- Date: Fri, 21 Oct 2022 04:38:25 GMT
- Title: Privacy-Preserved Neural Graph Similarity Learning
- Authors: Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen
- Abstract summary: We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both graphs are communicated.
- Score: 99.78599103903777
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To develop effective and efficient graph similarity learning (GSL) models, a
series of data-driven neural algorithms have been proposed in recent years.
Although GSL models are frequently deployed in privacy-sensitive scenarios, the
user privacy protection of neural GSL models has not drawn much attention. To
comprehensively understand the privacy protection issues, we first introduce
the concept of attackable representation to systematically characterize the
privacy attacks that each model can face. Inspired by the qualitative results,
we propose a novel Privacy-Preserving neural Graph Matching network model,
named PPGM, for graph similarity learning. To prevent reconstruction attacks,
the proposed model does not communicate node-level representations between
devices. Instead, we learn multi-perspective graph representations based on
learnable context vectors. To alleviate attacks on graph properties, obfuscated
features that contain information from both graphs are communicated. In this
way, the private properties of each graph are difficult to infer. Because
node-graph matching techniques are applied while calculating the obfuscated
features, PPGM also remains effective at similarity measurement. To quantitatively
evaluate the privacy-preserving ability of neural GSL models, we further
propose an evaluation protocol via training supervised black-box attack models.
Extensive experiments on widely-used benchmarks show the effectiveness and
strong privacy-protection ability of the proposed model PPGM. The code is
available at: https://github.com/RUCAIBox/PPGM.
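As a rough illustration of the two ideas above, the sketch below pools node embeddings against learnable context vectors so that only graph-level, multi-perspective representations ever leave a device, and then mixes the two graphs' pooled features before they are exchanged. It is an assumption about the design for illustration only; the class and function names are hypothetical and the code is not taken from the PPGM repository.

```python
import torch
import torch.nn as nn

class MultiPerspectivePooling(nn.Module):
    """Pool node embeddings against K learnable context vectors, so only
    K graph-level vectors (never per-node embeddings) would be shared."""
    def __init__(self, dim: int, num_contexts: int = 4):
        super().__init__()
        self.contexts = nn.Parameter(torch.randn(num_contexts, dim))

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: [num_nodes, dim] -> [num_contexts, dim]
        attn = torch.softmax(node_emb @ self.contexts.t(), dim=0)
        return attn.t() @ node_emb

def obfuscated_features(g_a: torch.Tensor, g_b: torch.Tensor) -> torch.Tensor:
    """Mix information from both graphs (node-graph-matching style scores)
    before communication, so one graph's properties are harder to infer."""
    sim = torch.softmax(g_a @ g_b.t(), dim=-1)   # cross-graph matching weights
    return torch.cat([g_a, sim @ g_b], dim=-1)   # each row blends both graphs

# Usage sketch: each party pools locally, then only mixed features are shared.
pool = MultiPerspectivePooling(dim=64)
h_a, h_b = torch.randn(30, 64), torch.randn(25, 64)   # local node embeddings
msg = obfuscated_features(pool(h_a), pool(h_b))
similarity = msg.mean()                               # stand-in scoring head
```

The paper's quantitative protocol could then be emulated by training a separate supervised black-box attack model that tries to predict private graph properties from the communicated features; the worse that attacker performs, the stronger the privacy protection.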
Related papers
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
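A minimal sketch of what an independence-style regularizer could look like when added to a (variational) graph auto-encoder loss; the exact PVGAE penalty may differ, and the off-diagonal covariance term below is only a stand-in for the idea.

```python
import torch

def independence_penalty(z: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between latent dimensions of node embeddings z
    (a stand-in for the independent-distribution regularizer)."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.t() @ z) / max(z.shape[0] - 1, 1)        # [d, d] covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum()

# Hypothetical total loss: reconstruction + KL + weighted independence term.
# loss = recon_loss + kl_loss + lambda_ind * independence_penalty(z)
```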
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
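The abstract does not spell out the augmentation, but a toy version of homophilous augmentation might add edges between node pairs that the current model predicts to share a label; the function below is an illustrative guess, not the CHAGNN algorithm.

```python
import torch

def homophilous_augment(edge_index: torch.Tensor,
                        pred_labels: torch.Tensor,
                        num_candidates: int = 100) -> torch.Tensor:
    """Add edges between randomly sampled node pairs whose predicted labels
    agree, nudging the graph toward homophily (illustrative only)."""
    n = pred_labels.shape[0]
    src = torch.randint(0, n, (num_candidates,))
    dst = torch.randint(0, n, (num_candidates,))
    keep = (pred_labels[src] == pred_labels[dst]) & (src != dst)
    new_edges = torch.stack([src[keep], dst[keep]], dim=0)
    return torch.cat([edge_index, new_edges], dim=1)   # [2, E + added]
```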
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and learned with heterogeneous graph neural networks (HGNNs).
We propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks based on a differential privacy mechanism.
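The specific mechanism in HeteDP is not described here, but the basic building block of most differentially private GNN methods is clipping followed by calibrated noise; the helper below is a generic Gaussian-mechanism sketch, not HeteDP's design.

```python
import torch

def gaussian_mechanism(x: torch.Tensor, clip: float = 1.0,
                       noise_multiplier: float = 1.0) -> torch.Tensor:
    """Clip each row to L2 norm `clip`, then add Gaussian noise scaled to
    the clipping bound (generic DP building block, for illustration)."""
    norms = x.norm(dim=-1, keepdim=True).clamp(min=1e-12)
    x = x * torch.clamp(clip / norms, max=1.0)
    return x + torch.randn_like(x) * noise_multiplier * clip
```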
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
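As a hedged illustration of neural graph matching used as a pre-training signal, the function below aligns two graphs' node embeddings with cross-graph attention and scores their pooled, matched views; GMPT's actual objective may differ.

```python
import torch
import torch.nn.functional as F

def graph_matching_score(h_a: torch.Tensor, h_b: torch.Tensor) -> torch.Tensor:
    """Cross-graph attention then pooling: a common neural graph matching
    recipe, shown here only as a sketch of the general idea."""
    attn_ab = torch.softmax(h_a @ h_b.t(), dim=-1)   # align A's nodes to B
    attn_ba = torch.softmax(h_b @ h_a.t(), dim=-1)   # align B's nodes to A
    g_a = torch.cat([h_a, attn_ab @ h_b], dim=-1).mean(dim=0)
    g_b = torch.cat([h_b, attn_ba @ h_a], dim=-1).mean(dim=0)
    return F.cosine_similarity(g_a, g_b, dim=0)      # pre-training target
```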
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
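A minimal sketch of the projected-gradient idea described above: relax the adjacency matrix to continuous values, descend on an attack objective plus sparsity and feature-smoothness regularizers, and project back to [0, 1]. The `attack_loss_fn` callable and the step itself are assumptions for illustration, not the released GraphMI code.

```python
import torch

def projected_gradient_step(adj: torch.Tensor, feats: torch.Tensor,
                            attack_loss_fn, lr: float = 0.1,
                            l1: float = 1e-3, smooth: float = 1e-3) -> torch.Tensor:
    """One update on a relaxed adjacency matrix (illustrative sketch).
    attack_loss_fn(adj, feats) is a hypothetical objective built from the
    target model's outputs."""
    adj = adj.clone().detach().requires_grad_(True)
    lap = torch.diag(adj.sum(-1)) - adj                       # graph Laplacian
    loss = (attack_loss_fn(adj, feats)
            + l1 * adj.abs().sum()                            # edge sparsity
            + smooth * torch.trace(feats.t() @ lap @ feats))  # feature smoothness
    loss.backward()
    with torch.no_grad():
        adj = adj - lr * adj.grad
    return adj.clamp(0.0, 1.0)                                # project to [0, 1] box
```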
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Locally Private Graph Neural Networks [12.473486843211573]
We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
Experiments conducted over real-world datasets demonstrate that our method can maintain a satisfactory level of accuracy with low privacy loss.
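A hedged sketch of the local-randomization idea: each user perturbs their own bounded feature vector before it leaves the device, so the server only ever sees noisy inputs. The Laplace mechanism below is a generic stand-in rather than the paper's specific mechanism.

```python
import torch

def local_randomizer(x: torch.Tensor, eps: float = 1.0) -> torch.Tensor:
    """Perturb a user's feature vector on-device (generic local DP sketch,
    assuming features are pre-normalized to [0, 1])."""
    x = x.clamp(0.0, 1.0)
    sensitivity = float(x.shape[-1])            # worst-case L1 change per user
    scale = sensitivity / eps
    return x + torch.distributions.Laplace(0.0, scale).sample(x.shape)
```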
arXiv Detail & Related papers (2020-06-09T22:36:06Z)