LinkTeller: Recovering Private Edges from Graph Neural Networks via
Influence Analysis
- URL: http://arxiv.org/abs/2108.06504v1
- Date: Sat, 14 Aug 2021 09:53:42 GMT
- Authors: Fan Wu, Yunhui Long, Ce Zhang, Bo Li
- Abstract summary: We focus on edge privacy and consider a training scenario in which Bob, who holds the node features, first sends training node features to Alice, who owns the adjacency information.
We first propose LinkTeller, a privacy attack that uses influence analysis to infer the private edge information held by Alice.
We then empirically show that LinkTeller is able to recover a significant number of private edges, outperforming existing baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-structured data have enabled several successful applications,
such as recommendation systems and traffic prediction, given their rich node
features and edge information. However, these high-dimensional features and
high-order
adjacency information are usually heterogeneous and held by different data
holders in practice. Given such vertical data partition (e.g., one data holder
will only own either the node features or edge information), different data
holders have to develop efficient joint training protocols rather than directly
transfer data to each other due to privacy concerns. In this paper, we focus
on edge privacy and consider a training scenario in which Bob, who holds the
node features, first sends the training node features to Alice, who owns the
adjacency information. Alice will then train a graph neural network (GNN) with
the joint
information and release an inference API. During inference, Bob is able to
provide test node features and query the API to obtain the predictions for test
nodes. Under this setting, we first propose LinkTeller, a privacy attack that
uses influence analysis to infer the private edge information held by Alice by
designing adversarial queries for Bob. We then empirically show that
LinkTeller is able to recover a significant number of private edges,
outperforming
existing baselines. To further evaluate the privacy leakage, we adapt an
existing algorithm for differentially private graph convolutional network (DP
GCN) training and propose a new DP GCN mechanism LapGraph. We show that these
DP GCN mechanisms are not always resilient against LinkTeller empirically under
mild privacy guarantees ($\varepsilon>5$). Our studies shed light on future
research towards designing more resilient privacy-preserving GCN models and,
in the meantime, provide an in-depth understanding of the tradeoff between GCN
model utility and robustness against potential privacy attacks.
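As a concrete illustration of the influence analysis described above, the
following is a minimal sketch (in Python/NumPy) of how an attacker in Bob's
position could probe the inference API: perturb one test node's features,
re-query, and treat large changes in other nodes' predictions as evidence of
an edge. The `query_api` callable, the finite-difference step `delta`, and the
density-based edge selection are illustrative assumptions, not the paper's
exact procedure.

```python
# Hedged sketch of LinkTeller-style influence analysis. Assumption: the
# attacker can repeatedly call a black-box API `query_api(features)` that
# returns per-node class probabilities for a fixed set of inference nodes.
import numpy as np

def influence_matrix(query_api, features, delta=1e-4):
    """Perturb node i's features, re-query, and record how much every
    node j's prediction moves (a finite-difference influence estimate)."""
    n = features.shape[0]
    base = query_api(features)                      # (n, num_classes)
    infl = np.zeros((n, n))
    for i in range(n):
        perturbed = features.copy()
        perturbed[i] += delta                       # adversarial re-query
        diff = (query_api(perturbed) - base) / delta
        infl[i] = np.linalg.norm(diff, axis=1)      # influence of i on each j
    return infl

def infer_edges(infl, density_estimate):
    """Predict the highest-influence pairs as edges; `density_estimate` is
    the attacker's belief about the graph density (an assumption)."""
    n = infl.shape[0]
    sym = np.maximum(infl, infl.T)                  # symmetrize: undirected graph
    iu = np.triu_indices(n, k=1)
    scores = sym[iu]
    k = int(density_estimate * len(scores))
    top = np.argsort(scores)[::-1][:k]
    return [(int(iu[0][t]), int(iu[1][t])) for t in top]
```

The intuition behind such an attack is that in a k-layer GCN, node i's
features can only affect node j's prediction if j lies within k hops of i, so
a strong first-order influence is evidence of a direct edge.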
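On the defense side, a LapGraph-style mechanism can be sketched as follows.
The abstract only names the mechanism; the budget split between estimating the
edge count and perturbing the adjacency matrix, and the top-k retention step,
are assumptions about details not stated here.

```python
# Hedged sketch of a LapGraph-style DP mechanism: add Laplace noise to the
# upper-triangular adjacency entries and keep only the top entries so the
# released graph stays sparse. The 10%/90% budget split is illustrative.
import numpy as np

def lapgraph(adj, eps, count_fraction=0.1, rng=None):
    rng = rng or np.random.default_rng()
    eps_count = count_fraction * eps                # budget for the edge count
    eps_graph = eps - eps_count                     # budget for the entries
    iu = np.triu_indices(adj.shape[0], k=1)
    edges = adj[iu].astype(float)
    # Privately estimate how many edges to keep (sensitivity 1 per edge).
    t = int(round(edges.sum() + rng.laplace(scale=1.0 / eps_count)))
    t = max(t, 0)
    # Perturb every potential edge and retain the top-t noisy entries.
    noisy = edges + rng.laplace(scale=1.0 / eps_graph, size=edges.shape)
    keep = np.argsort(noisy)[::-1][:t]
    out = np.zeros_like(adj)
    out[iu[0][keep], iu[1][keep]] = 1
    return out + out.T                              # symmetric binary adjacency
```

Training a GCN on the perturbed adjacency then inherits edge-level DP by
post-processing; the abstract's finding is that for $\varepsilon>5$ the
retained structure can still leak enough influence signal for LinkTeller.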
Related papers
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation (2023-09-06)
  We propose using link local differential privacy over decentralized nodes to train graph neural networks.
  Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
  Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
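The link LDP setting above can be illustrated with the classic
randomized-response baseline: each node flips the bits of its own adjacency
list before reporting, and the server debiases the aggregate. This is a
generic sketch of the setting, not Blink's Bayesian estimator or its
links/degrees budget split.

```python
# Hedged sketch of link local differential privacy via randomized response;
# Blink's actual Bayesian denoising is more sophisticated.
import numpy as np

def perturb_adjacency_row(row, eps, rng):
    """Node-side: flip each adjacency bit with probability 1 / (1 + e^eps),
    which satisfies eps-edge-LDP for this node's links."""
    p_flip = 1.0 / (1.0 + np.exp(eps))
    flips = rng.random(row.shape) < p_flip
    return np.where(flips, 1 - row, row)

def debias(noisy, eps):
    """Server-side: standard unbiased estimate of the true bits from the
    randomized responses."""
    p = 1.0 / (1.0 + np.exp(eps))
    return (noisy - p) / (1.0 - 2.0 * p)
```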
- Independent Distribution Regularization for Private Graph Embedding (2023-08-16)
  Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
  To address these concerns, privacy-preserving graph embedding methods have emerged.
  We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of an independent distribution penalty as a regularization term.
- Differentially Private Graph Neural Network with Importance-Grained Noise Adaption (2023-08-09)
  Graph Neural Networks (GNNs) with differential privacy have been proposed to preserve graph privacy when nodes represent personal and sensitive information.
  We study the problem of importance-grained privacy, where nodes contain personal data that need to be kept private but are critical for training a GNN.
  We propose NAP-GNN, a node-grained privacy-preserving GNN algorithm with privacy guarantees based on adaptive differential privacy to safeguard node information.
- ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees (2023-04-18)
  Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
  We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve the accuracy-privacy trade-off.
- Privacy-Preserved Neural Graph Similarity Learning (2022-10-21)
  We propose a novel privacy-preserving neural graph matching network model, named PPGM, for graph similarity learning.
  To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
  To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation (2022-10-02)
  With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and processed with heterogeneous graph neural networks (HGNNs).
  We propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks based on a differential privacy mechanism.
- Model Inversion Attacks against Graph Neural Networks (2022-09-16)
  We study model inversion attacks against Graph Neural Networks (GNNs).
  In this paper, we present GraphMI to infer the private training graph data.
  Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (2022-03-02)
  Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations.
  Recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information.
  We propose GAP, a novel differentially private GNN that safeguards the privacy of nodes and edges.
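The aggregation-perturbation idea can be sketched generically: bound each
node's contribution to its neighbors' aggregates, then add calibrated Gaussian
noise to the aggregate itself, so one edge's presence is masked. The function
below illustrates that principle, not GAP's actual multi-hop pipeline or
privacy accounting.

```python
# Hedged sketch of aggregation perturbation: row-normalize embeddings so each
# neighbor contributes at most unit L2 norm (bounding the sensitivity of the
# sum to any single edge), then add Gaussian noise to the aggregate.
import numpy as np

def noisy_neighborhood_aggregate(x, adj, sigma, rng=None):
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    bounded = x / np.maximum(norms, 1e-12)          # unit-norm contributions
    agg = adj @ bounded                             # sum over neighbors
    return agg + rng.normal(scale=sigma, size=agg.shape)
```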
- GraphMI: Extracting Private Graph Data from Graph Neural Networks (2021-06-05)
  We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting the GNN.
  Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
  We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
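The projected-gradient idea in the GraphMI summary can be sketched in a
generic form: relax the discrete adjacency matrix to [0, 1], ascend an attack
objective, and project back after each step. The `grad_fn` callable standing
in for the gradient of GraphMI's actual loss is a placeholder assumption.

```python
# Hedged sketch of projected gradient ascent over a relaxed adjacency matrix.
# `grad_fn(a)` is a hypothetical callable returning the gradient of the
# attacker's objective with respect to the relaxed adjacency `a`.
import numpy as np

def project(a):
    """Clip to [0, 1] and symmetrize to stay a valid relaxed adjacency."""
    a = np.clip(a, 0.0, 1.0)
    return (a + a.T) / 2.0

def recover_edges(grad_fn, n, steps=200, lr=0.1, rng=None):
    rng = rng or np.random.default_rng()
    a = project(rng.random((n, n)))                 # random relaxed start
    for _ in range(steps):
        a = project(a + lr * grad_fn(a))            # ascend, then project
    return (a > 0.5).astype(int)                    # discretize recovered edges
```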
- Locally Private Graph Neural Networks (2020-06-09)
  We study the problem of node data privacy, where graph nodes have potentially sensitive data that is kept private.
  We develop a privacy-preserving, architecture-agnostic GNN learning algorithm with formal privacy guarantees.
  Experiments conducted over real-world datasets demonstrate that our method can maintain a satisfying level of accuracy with low privacy loss.
- GPS-Net: Graph Property Sensing Network for Scene Graph Generation (2020-03-29)
  Scene graph generation (SGG) aims to detect objects in an image along with their pairwise relationships.
  GPS-Net fully explores three properties for SGG: edge direction information, the difference in priority between nodes, and the long-tailed distribution of relationships.
  GPS-Net achieves state-of-the-art performance on three popular databases (VG, OI, and VRD) with significant gains under various settings and metrics.