Efficient and Privacy-Preserved Link Prediction via Condensed Graphs
- URL: http://arxiv.org/abs/2503.12156v1
- Date: Sat, 15 Mar 2025 14:54:04 GMT
- Title: Efficient and Privacy-Preserved Link Prediction via Condensed Graphs
- Authors: Yunbo Long, Liming Xu, Alexandra Brintrup
- Abstract summary: We introduce HyDRO+, a graph condensation method guided by algebraic Jaccard similarity. Our method achieves nearly 20x faster training and reduces storage requirements by 452x, compared to link prediction on the original networks.
- Score: 49.898152180805454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction is crucial for uncovering hidden connections within complex networks, enabling applications such as identifying potential customers and products. However, this research faces significant challenges, including concerns about data privacy, as well as high computational and storage costs, especially when dealing with large-scale networks. Condensed graphs, which are much smaller than the original graphs while retaining essential information, have become an effective solution for both maintaining data utility and preserving privacy. Existing methods, however, initialize synthetic graphs through random node selection without considering node connectivity, and are mainly designed for node classification tasks. As a result, their potential for privacy-preserving link prediction remains largely unexplored. We introduce HyDRO+, a graph condensation method guided by algebraic Jaccard similarity, which leverages local connectivity information to optimize condensed graph structures. Extensive experiments on four real-world networks show that our method outperforms state-of-the-art methods and even the original networks in balancing link prediction accuracy and privacy preservation. Moreover, our method achieves nearly 20x faster training and reduces storage requirements by 452x, as demonstrated on the Computers dataset, compared to link prediction on the original networks. This work represents the first attempt to leverage condensed graphs for privacy-preserving sharing of link prediction information in real-world complex networks. It offers a promising pathway for preserving link prediction information while safeguarding privacy, advancing the use of graph condensation in large-scale networks with privacy concerns.
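The abstract attributes the gains to connectivity-aware initialization of the condensed graph rather than random node selection. As a rough illustration only (the paper's exact algebraic Jaccard formulation is not reproduced here), the sketch below scores candidate seed nodes by the Jaccard overlap of their neighborhoods and keeps those that add new local structure; the function names, the 0.5 overlap threshold, and the selection heuristic are all hypothetical.

```python
import networkx as nx

def jaccard_guided_seeds(G: nx.Graph, budget: int) -> list:
    """Pick up to `budget` seed nodes whose neighborhoods best cover the graph,
    scoring candidates by Jaccard overlap with already-covered nodes.
    Illustrative stand-in for connectivity-aware initialization; the paper's
    algebraic Jaccard similarity may be defined differently."""
    seeds, covered = [], set()
    # Consider well-connected nodes first so hubs are examined early.
    candidates = sorted(G.nodes, key=G.degree, reverse=True)
    for v in candidates:
        if len(seeds) >= budget:
            break
        nbrs = set(G.neighbors(v))
        # Jaccard overlap between this node's neighborhood and what is already covered.
        overlap = len(nbrs & covered) / max(len(nbrs | covered), 1)
        if overlap < 0.5:          # keep nodes that contribute new local structure
            seeds.append(v)
            covered |= nbrs
    return seeds

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(jaccard_guided_seeds(G, budget=5))
```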
Related papers
- Privacy-Preserving Graph Embedding based on Local Differential Privacy [26.164722283887333]
We introduce a novel privacy-preserving graph embedding framework, named PrivGE, to protect node data privacy.
Specifically, we propose an LDP mechanism to obfuscate node data and utilize personalized PageRank as the proximity measure to learn node representations.
Experiments on several real-world graph datasets demonstrate that PrivGE achieves an optimal balance between privacy and utility.
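The summary does not spell out PrivGE's LDP mechanism. As a generic stand-in only, the sketch below perturbs each node's feature vector locally with Laplace noise before it leaves the node, one common way to satisfy epsilon-LDP; the function name and noise model are assumptions, not the paper's actual design.

```python
import numpy as np

def laplace_ldp(features: np.ndarray, epsilon: float, sensitivity: float = 1.0) -> np.ndarray:
    """Obfuscate each node's feature vector locally with Laplace noise before
    sharing (generic epsilon-LDP stand-in; PrivGE's actual mechanism may
    use a different perturbation)."""
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=features.shape)
    return features + noise

# Usage: each of 100 nodes perturbs its own 16-dim feature vector independently.
noisy = laplace_ldp(np.random.rand(100, 16), epsilon=1.0)
```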
arXiv Detail & Related papers (2023-10-17T08:06:08Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
To address the privacy risks posed by sensitive graph data, researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- CONVERT: Contrastive Graph Clustering with Reliable Augmentation [110.46658439733106]
We propose a novel CONtrastiVe Graph ClustEring network with Reliable AugmenTation (CONVERT).
In our method, the data augmentations are processed by the proposed reversible perturb-recover network.
To further guarantee the reliability of semantics, a novel semantic loss is presented to constrain the network.
arXiv Detail & Related papers (2023-08-17T13:07:09Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and processed with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis [15.923158902023669]
We focus on edge privacy and consider a training scenario where Bob, who holds the node features, first sends training node features to Alice, who owns the adjacency information.
We first propose a privacy attack LinkTeller via influence analysis to infer the private edge information held by Alice.
We then empirically show that LinkTeller is able to recover a significant amount of private edges, outperforming existing baselines.
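A minimal sketch of the influence-analysis idea, under the assumption that the attacker can query a black-box prediction API: perturb one node's features, re-query, and flag a candidate edge if another node's output shifts. `predict_fn` and the constants are hypothetical placeholders, not LinkTeller's actual procedure.

```python
import numpy as np

def influence_edge_probe(predict_fn, X, u, v, delta=1e-3, threshold=1e-4):
    """Influence-style probe: perturb node u's features, re-query the model,
    and flag (u, v) as a likely edge if node v's prediction shifts.
    `predict_fn(X) -> (n_nodes, n_classes) probabilities` is a hypothetical
    black-box inference API."""
    base = predict_fn(X)
    X_pert = X.astype(float)               # copy so the original features are untouched
    X_pert[u] += delta                      # small perturbation of node u only
    shifted = predict_fn(X_pert)
    influence = np.linalg.norm(shifted[v] - base[v])
    return influence > threshold
```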
arXiv Detail & Related papers (2021-08-14T09:53:42Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
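A simplified sketch of the projected-gradient idea: the adjacency matrix is relaxed to continuous edge weights, updated by gradient descent, and clamped back to [0, 1] after every step. The objective `loss_fn` built from the target model is left abstract, and GraphMI's sparsity/smoothness terms and auto-encoder module are omitted; names are illustrative.

```python
import torch

def projected_gradient_adjacency(loss_fn, n_nodes, steps=200, lr=0.1):
    """Optimize a relaxed adjacency matrix with projected gradient descent.
    `loss_fn(A)` is a hypothetical scalar objective derived from the target
    model's outputs."""
    A = torch.full((n_nodes, n_nodes), 0.5, requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(A).backward()
        opt.step()
        with torch.no_grad():
            A.clamp_(0.0, 1.0)            # projection keeps entries valid edge weights
            A.copy_((A + A.T) / 2)        # keep the recovered graph undirected
    return A.detach()
```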
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Communication-Computation Efficient Secure Aggregation for Federated Learning [23.924656276456503]
Federated learning is a way to train neural networks using data distributed over multiple nodes without the need for the nodes to share data.
A recent solution based on the secure aggregation primitive enabled privacy-preserving federated learning, but at the expense of significant extra communication/computational resources.
We propose communication-computation efficient secure aggregation which substantially reduces the amount of communication/computational resources.
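A toy sketch of the pairwise-masking idea behind secure aggregation: every pair of clients shares a random mask that one adds and the other subtracts, so individual updates stay hidden while the server-side sum is unchanged. Real protocols add key agreement, quantization, and dropout handling, all omitted here; the names are illustrative.

```python
import numpy as np

def masked_updates(updates, rng_seed=0):
    """Apply cancelling pairwise masks to a list of client update vectors."""
    n = len(updates)
    rng = np.random.default_rng(rng_seed)
    masked = [u.astype(float) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.standard_normal(updates[i].shape)
            masked[i] += mask              # client i adds the shared mask
            masked[j] -= mask              # client j subtracts it
    return masked

updates = [np.ones(4) * k for k in range(3)]
assert np.allclose(sum(masked_updates(updates)), sum(updates))  # aggregate is preserved
```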
arXiv Detail & Related papers (2020-12-10T03:17:50Z)
- Secure Deep Graph Generation with Link Differential Privacy [32.671503863933616]
We leverage the differential privacy (DP) framework to formulate and enforce rigorous privacy constraints on deep graph generation models.
In particular, we enforce edge-DP by injecting proper noise to the gradients of a link reconstruction-based graph generation model.
Our proposed DPGGAN model is able to generate graphs with effectively preserved global structure and rigorously protected individual link privacy.
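A rough stand-in for the noise-injection step: a DP-SGD-style update that clips the gradient norm and adds Gaussian noise before applying it. Per-example clipping and privacy accounting, which a real edge-DP guarantee requires, are omitted, and the names and constants are illustrative rather than DPGGAN's actual training loop.

```python
import torch

def dp_gradient_step(model, loss, clip_norm=1.0, noise_mult=1.0, lr=0.01):
    """Clip the total gradient norm, add Gaussian noise, then update parameters
    (per-example clipping omitted for brevity)."""
    model.zero_grad()
    loss.backward()
    total_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            # Clipped gradient plus calibrated Gaussian noise bounds any single link's influence.
            g = p.grad * scale + noise_mult * clip_norm * torch.randn_like(p.grad)
            p -= lr * g
    return float(total_norm)
```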
arXiv Detail & Related papers (2020-05-01T15:49:17Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy is 90.96% higher than the advanced deep learning models.
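A minimal FedAvg-style sketch of the aggregation step, assuming the server only ever sees model parameters weighted by local dataset size; FedGRU's secure parameter aggregation mechanism (the cryptographic part) is not shown, and all names here are hypothetical.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine per-client parameter lists into a global model, weighting each
    client by its local dataset size; raw traffic data never leaves the clients."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two clients, each holding two parameter arrays of a (hypothetical) GRU model.
w_a = [np.zeros((2, 2)), np.zeros(2)]
w_b = [np.ones((2, 2)), np.ones(2)]
print(federated_average([w_a, w_b], client_sizes=[100, 300]))
```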
arXiv Detail & Related papers (2020-03-19T13:07:49Z)